What The Knapsack Problem Teaches Us About Efficiency
Optimal efficiency for complex systems has been the Holy Grail of economics since day one. Like the Holy Grail, it has remained elusive throughout history. But breakthroughs in AI are now changing that.
Near-optimal solutions to longstanding mathematical problems with names you might recognize – The Traveling Salesman Problem, The Bin-Packing Problem, or The Knapsack Problem – are now within our grasp at scales that were previously impractical.
Now is a good time, then, to revisit the topic of optimal efficiency for complex systems with our clients. By the end of this article, you should come away with insights for making your own tools and systems work better.
Combinatorial Optimization Problems (COPs)
In complex systems with many variables and interactions, such as managing workloads, prioritizing leads, or selecting partners, there may be millions of possible combinations to compare, each with different costs, timings, and knock-on effects.
When problems reach this magnitude, we turn to the mathematical field of combinatorial optimization.
Combinatorial optimization is a branch of mathematics that intersects several fields, including machine learning, software engineering, and complexity theory. Combinatorial optimization problems (COPs) are defined by several distinct features:
- They are discrete
- There is a finite set of possibilities in the problem set to pick from
- The set of possibilities is defined by a number of restrictions (e.g., number of trucks, number of destinations, average truck speed)
- The goal is to maximize or minimize a specific objective (cost, speed, price, etc.)
Another characteristic of COPs is that their solutions are often as complex as the problems, and implementing those solutions requires a mathematical mindset. This article aims to teach you that mindset.
The classic example of a COP that we will explore and relate to your business is the knapsack problem.
The Knapsack Problem
You are tasked with filling a knapsack from a selection of items varying in size and value. Your goal is to maximize the total value of goods without exceeding the capacity of the knapsack. In layman’s terms: how much value can you fit inside your knapsack, given that each item takes up a certain amount of space?
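To make this concrete, here is a toy instance of the problem (the values, weights, and capacity below are invented purely for illustration):

```python
# A toy 0/1 knapsack instance: each item is a (value, weight) pair.
items = [(60, 10), (100, 20), (120, 30)]
capacity = 50  # total weight the knapsack can hold

# One valid selection is the second and third items: value 220, weight 50.
# The question is whether any other selection that fits can do better.
```

We will reuse this (value, weight) representation in the sketches that follow.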
Abstractions like The Knapsack Problem can enlighten our clients about optimizing their HubSpot portal, personnel, and processes. For example, you might view your sales team’s daily capacity to work leads, or the agenda of a marketing event, as a knapsack to be filled.
So how can you fill your knapsack most intelligently?
Since ancient times, the world has used a mishmash of techniques to solve COPs like this. While modern technology brings us closer than ever before, a perfect method has yet to be found.
But the purpose of this article is not to hand you a perfect solution to the Knapsack Problem. Its purpose is to share the wisdom humanity has gained from the pursuit of that solution: a new way of thinking about complexity.
Just as in every good adventure movie, the true prize is not the boon we set out for, but the lessons we learn along the journey!
So let’s follow the journey through the many methods we’ve used to solve The Knapsack Problem over the years, see how they might change our perspective on complex problem solving, and consider how those insights might benefit your business.
The Evolution of Methods
Brute Force
- How It Works: Examines every possible combination of items, one by one (see the sketch below).
- Pros: Guarantees an exact solution.
- Cons: Impractical for large inputs due to exponential time complexity.
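A minimal brute-force sketch in Python; the function name and the (value, weight) item representation are our own, chosen purely for illustration:

```python
from itertools import combinations

def knapsack_brute_force(items, capacity):
    """Try every subset of items and keep the best one that fits.

    items: list of (value, weight) pairs; capacity: maximum total weight.
    Runs in O(2^n) time, so it is only practical for small item counts.
    """
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w in subset)
            value = sum(v for v, _ in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset
```

On the toy instance above, this returns a best value of 220 (the second and third items), but the number of subsets to check doubles with every item added.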
Recursion
- How It Works: Systematically explores all combinations by using function calls to include or exclude each item, like a guided brute force (see the sketch below).
- Pros: Guarantees an exact solution faster than with brute force.
- Cons: Still impractical for large inputs due to exponential time complexity.
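A recursive sketch, where each call decides whether to include or exclude one item; names are again illustrative:

```python
def knapsack_recursive(items, capacity, i=0):
    """Return the best achievable value using items[i:] within the given capacity."""
    if i == len(items):
        return 0
    value, weight = items[i]
    # Option 1: exclude item i.
    best = knapsack_recursive(items, capacity, i + 1)
    # Option 2: include item i, but only if it still fits.
    if weight <= capacity:
        best = max(best, value + knapsack_recursive(items, capacity - weight, i + 1))
    return best
```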
Dynamic Programming
- How It Works: Stores solutions to subproblems in a table, avoiding redundant calculations (see the sketch below).
- Pros: Faster way to an exact solution.
- Cons: Memory-intensive for large instances.
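A dynamic programming sketch, assuming integer weights (the usual requirement for the table-based approach); names are illustrative:

```python
def knapsack_dp(items, capacity):
    """Bottom-up dynamic programming for the 0/1 knapsack.

    dp[c] holds the best value achievable with total weight at most c.
    Time O(n * capacity), memory O(capacity); assumes integer weights.
    """
    dp = [0] * (capacity + 1)
    for value, weight in items:
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]
```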
Greedy Algorithm
- How It Works: Sorts items by value-to-weight ratio and selects the highest ratios first (see the sketch below).
- Pros: Fast and optimal for the Fractional Knapsack Problem (items can be split).
- Cons: Not suitable if items cannot be split.
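A greedy sketch for the fractional variant, where items may be split; the function name is illustrative and weights are assumed to be positive:

```python
def fractional_knapsack_greedy(items, capacity):
    """Greedy solution to the fractional knapsack: take the best value-to-weight ratios first."""
    remaining, total_value = capacity, 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if remaining <= 0:
            break
        take = min(weight, remaining)           # whole item, or whatever fraction still fits
        total_value += value * (take / weight)
        remaining -= take
    return total_value
```

For the 0/1 variant, where items cannot be split, the same ordering is only a heuristic and can miss the optimum.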
Branch and Bound
- How It Works: Builds a solution tree, calculates an upper bound for each branch, and explores only the branches that could improve on the best configuration found so far, pruning the rest (see the sketch below).
- Pros: Faster than brute force as pruning means it does not have to compare every possible combination.
- Cons: Handles large instances well, but can still be slow for very large ones.
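A branch-and-bound sketch that uses the greedy fractional value of the remaining items as the upper bound for pruning (one common choice of bound; names and structure are illustrative):

```python
def knapsack_branch_and_bound(items, capacity):
    """Depth-first branch and bound for the 0/1 knapsack."""
    # Sort by value-to-weight ratio so the fractional bound is easy to compute.
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    best = 0

    def bound(i, cap, value):
        # Optimistic estimate: fill remaining capacity with (fractions of) remaining items.
        for v, w in items[i:]:
            if w <= cap:
                cap -= w
                value += v
            else:
                return value + v * cap / w
        return value

    def explore(i, cap, value):
        nonlocal best
        best = max(best, value)
        # Prune: stop if no items remain or this branch cannot beat the best known value.
        if i == len(items) or bound(i, cap, value) <= best:
            return
        v, w = items[i]
        if w <= cap:
            explore(i + 1, cap - w, value + v)  # branch: take item i
        explore(i + 1, cap, value)              # branch: skip item i

    explore(0, capacity, 0)
    return best
```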
Metaheuristics
- How It Works: Methods such as Genetic Algorithms and Simulated Annealing iteratively improve upon candidate solutions (see the sketch below).
- Pros: Can find good solutions for very large instances quickly.
- Cons: Does not guarantee to find the best solution.
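A simulated annealing sketch, one of the metaheuristics mentioned above; the step count, starting temperature, and cooling rate are placeholder values you would tune per instance:

```python
import math
import random

def knapsack_simulated_annealing(items, capacity, steps=20000, temp=1.0, cooling=0.9995):
    """Simulated annealing for the 0/1 knapsack: flip one item in or out per step."""
    def score(state):
        weight = sum(w for (v, w), taken in zip(items, state) if taken)
        value = sum(v for (v, w), taken in zip(items, state) if taken)
        return value if weight <= capacity else 0   # infeasible states score zero

    state, current = [0] * len(items), 0
    best_state, best = state[:], 0
    for _ in range(steps):
        candidate = state[:]
        candidate[random.randrange(len(items))] ^= 1   # flip one random item
        cand = score(candidate)
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature cools, to escape local optima.
        if cand >= current or random.random() < math.exp((cand - current) / max(temp, 1e-9)):
            state, current = candidate, cand
        if current > best:
            best_state, best = state[:], current
        temp *= cooling
    return best, best_state
```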
Reinforcement Learning
- How It Works: Programs a virtual ‘agent’ and trains it on the problem by giving ‘rewards’ for decisions that prove to yield better results (see the sketch below).
- Pros: Finds near-optimal solutions quickly, even for very large instances.
- Cons: Requires significant computational resources to train well enough to yield optimal results.
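A tabular Q-learning sketch for a small instance; the state design (next item index plus remaining capacity), the reward design, and the hyperparameters are illustrative assumptions rather than a prescribed recipe:

```python
import random
from collections import defaultdict

def knapsack_q_learning(items, capacity, episodes=5000, alpha=0.1, gamma=1.0, epsilon=0.1):
    """Learn take/skip decisions: state = (next item index, remaining capacity)."""
    q = defaultdict(float)  # maps (state, action) -> estimated future value

    def actions(i, cap):
        # Action 1 (take) is only available if the item still fits.
        return [0, 1] if items[i][1] <= cap else [0]

    for _ in range(episodes):
        i, cap = 0, capacity
        while i < len(items):
            acts = actions(i, cap)
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            if random.random() < epsilon:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda x: q[((i, cap), x)])
            value, weight = items[i]
            reward = value if a == 1 else 0
            nxt_i, nxt_cap = i + 1, (cap - weight if a == 1 else cap)
            future = 0.0
            if nxt_i < len(items):
                future = max(q[((nxt_i, nxt_cap), x)] for x in actions(nxt_i, nxt_cap))
            # Q-learning update: nudge the estimate towards reward + best future value.
            q[((i, cap), a)] += alpha * (reward + gamma * future - q[((i, cap), a)])
            i, cap = nxt_i, nxt_cap

    # Greedy rollout with the learned Q-values.
    i, cap, total = 0, capacity, 0
    while i < len(items):
        a = max(actions(i, cap), key=lambda x: q[((i, cap), x)])
        if a == 1:
            total += items[i][0]
            cap -= items[i][1]
        i += 1
    return total
```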
Neural Networks
- How It Works: Builds neural networks that “learn” better ways to solve the problem via reinforcement learning (above).
- Pros: Can solve new instances quickly after training, even for extremely large datasets. It’s the reinforcement learning approach on steroids.
- Cons: Requires even more significant computational resources to train.
Slime Mould Algorithm
- How It Works: Inspired by the behavior of slime moulds, it uses binary encoding to iteratively optimize solutions.
- Pros: Fastest even for extremely large datasets.
- Cons: Set-up and fine-tuning for a specific knapsack instance take a long time, making it unsuitable for simple instances of the knapsack problem.
What a rich history of methods! But remember, the true prize is not some perfect and final solution. Instead, if we take a step back and consider what this journey teaches us, we’ll gain some valuable insights into how to handle complexity.
Insights For Your Business
So what insights might we gain from exploring this history of solutions? Well, when facing complex problems, remember this:
- Accept Trade-offs
In all the methods above, despite a general trend of improvement over time, we always see trade-offs between accuracy, speed, resources, scalability, set-up time, and so on. You can’t have it all, so decide what to sacrifice.
- Know your problem
In reality, there is no single Knapsack Problem. Every instance is unique, with a different number of items, a different knapsack capacity, different rules about whether items can be split, and so on. It’s easy to treat all instances of one complex problem as the same thing, but that intellectual laziness will cost you later.
- Know yourself
If you require absolute accuracy, dynamic programming suits you. If memory is the constraint, brute force uses far less of it than dynamic programming, at the cost of time. If speed is the priority, use the greedy algorithm. If you have the time and resources to train a machine, use a neural network. The best solution for you depends on your needs and abilities.
- Stay tuned
History shows a pattern of new methods emerging over time, each offering some improvement on its predecessors. It is smart to stay up to date with emerging technologies that might help us outperform our competitors, especially now with the advent of AI.
Struggling with something?
Send our team of HubSpot engineers a message or book a free connect call. We love making your tools work better.
Everyone's Happier When Tools Work Better
Thanks!
Obsessive about optimization, vBase Digital believes that humanity's happier when tools work better. If you want to work smarter and not harder with HubSpot, contact us.
Was this article useful? Let us know!