Gene Pools
Life is all about playing the odds in a game where the rules keep changing.
Once you simplify the problem at hand, you face a new challenge: how and when to adapt to an uncertain future. You may get to the heart of the problem and realize that you can't predict what the solution is because of an overwhelming number of variables in the mix. That's the normal state of affairs when solving complex problems, and you therefore need to once again embrace a more fluid set of strategies to improve your odds of success.
Natural systems have found ways to deal with this uncertainty for billions of years. Through countless generations, life has managed to survive and reproduce in every environment on the planet. How is this possible when there's no intention or design behind evolutionary processes? What is this secret sauce that nature has known all along, but that humans still struggle to understand? It all starts with the way genes are passed from one generation to the next, which is built out of three different pieces:
Heredity: Organisms pass their genes on to their offspring.
Variation: Genes randomly mix to create new variations of the species.
Selection: The fittest organisms survive and reproduce, while the rest die.
With these three simple mechanisms, gene pools generate adaptations in a wide range of environments. It's crazy to think about, but all the complex life you see around you is basically the accumulation of small, random changes across billions of years! As usual, nature has come up with a simple solution that humans have yet to fully reverse-engineer or copy on their own.
Back in the 1970s, John Holland (who you might remember from the Complex Adaptive Systems section) came up with a way to mimic some of these mechanisms with computers. His work focused on how to use software to solve optimization problems when the solution was unknown, and he studied evolutionary processes in his search for answers.
What he came up with are called genetic algorithms, due to their similarity to the evolutionary method of passing down genes from one generation to the next. Although there's plenty of debate within evolutionary biology about the exact details, the overall process works as follows:
Create the Initial Population: Randomly generate a group of potential solutions to your problem. These solutions are similar to the genetic code in DNA.
Evaluate Fitness: See how well each solution performs. You test each solution against your criteria for success. How close does each solution get to solving your problem? The closer it gets, the higher its 'fitness' score.
Select for Reproduction: Pick the best-performing solutions—the ones with the highest fitness scores—to be parents for the next generation. Sometimes, you might also include a few less-fit solutions to maintain genetic diversity.
Crossover (Mate): Create the next generation by combining aspects of the parent solutions. This is like biological reproduction—taking some genes from one parent and some from the other.
Mutation: To keep things spicy and avoid getting stuck in a rut, introduce random mutations. This means randomly tweaking the code of the offspring solutions. It’s a bit of a wildcard move to explore new possibilities and prevent the algorithm from settling on a local maximum.
Repeat the Process: Take this new generation and start again from step 2. Evaluate their fitness, select the best for reproduction, mate them to create a new generation, and introduce some mutations.
Termination: Finally, you need a stopping point. This could be after a certain number of generations, or when your solutions are good enough, or if you're not seeing much improvement between generations.
This process isn't an exact copy of how nature works, but it approximates it well enough to come up with a wide range of quality solutions. The core of its resilience lies in its relentless trial and error, a sort of genetic brainstorming.
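To make the steps above concrete, here's a minimal sketch in Python. It assumes a deliberately toy problem: each "solution" is a bitstring and fitness is simply the number of 1s, standing in for whatever evaluation your real problem requires. The population size, mutation rate, and other parameters are arbitrary placeholders, not recommendations.

```python
import random

GENOME_LENGTH = 20
POPULATION_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.01

def fitness(genome):
    # Higher is better; replace with your own evaluation of a candidate solution.
    return sum(genome)

def crossover(parent_a, parent_b):
    # Single-point crossover: a prefix from one parent, the suffix from the other.
    point = random.randint(1, GENOME_LENGTH - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Flip each bit with a small probability to keep exploring new possibilities.
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

# 1. Create the initial population of random candidate solutions.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

# 6. Repeat: run the evaluate/select/mate/mutate cycle generation after generation.
for generation in range(GENERATIONS):
    # 2. Evaluate fitness and 3. select the better half as parents.
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POPULATION_SIZE // 2]

    # 4. Crossover and 5. mutation produce the next generation.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POPULATION_SIZE)]

# 7. Termination: here we simply stop after a fixed number of generations.
best = max(population, key=fitness)
print(f"Best fitness after {GENERATIONS} generations: {fitness(best)}")
```

In a real application, the interesting work is almost entirely in the fitness function and the encoding of solutions; the selection, crossover, and mutation machinery stays roughly this simple.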
When are genetic algorithms or gene pool-style approaches appropriate? To provide a lucid example, read what John Holland wrote about how genetic algorithms were used to design a jet engine:
A group of researchers at General Electric and Rensselaer Polytechnic Institute recently put a genetic algorithm to good use in the design of a high-bypass jet engine turbine such as those that power commercial airliners.
Such turbines, which consist of multiple stages of stationary and rotating blade rows enclosed in a roughly cylindrical duct, are at the center of engine-development projects that last five years or more and consume up to $2 billion.
The design of a turbine involves at least 100 variables, each of which can take on a different range of values. The resulting search space contains more than 10^387 points.
The "fitness" of the turbine depends on how well it satisfies a series of 50 or so constraints, such as the smooth shape of its inner and outer walls or the pressure, velocity and turbulence of the flow at various points inside the cylinder.
Evaluating a single design requires running an engine simulation that takes about 30 seconds on a typical engineering workstation.
In one fairly typical case, an engineer working alone took about eight weeks to reach a satisfactory design. So-called expert systems, which use inference rules based on experience to predict the effects of a change of one or two variables, can help direct the designer in seeking out useful changes.
An engineer using such an expert system took less than a day to design an engine with twice the improvements of the eight-week manual design.
Such expert systems, however, soon get stuck at points where further improvements can be made only by changing many variables simultaneously. These dead ends occur because it is practically impossible to sort out all the effects associated with different multiple changes, let alone to specify the regions of the design space within which previous experience remains valid.
To get away from such a point, the designer must find new building blocks for a solution. Here is where the genetic algorithm comes into play. Seeding the algorithm with designs produced by the expert system, an engineer took only two days to find a design with three times the improvements of the manual version (and half again as many as using the expert system alone).
This example points up both a strength and a limitation of simple genetic algorithms: they are at their best when exploring complex landscapes to locate regions of enhanced opportunity. But if a partial solution can be improved further by making changes in a few variables, it is best to augment the genetic algorithm with other, more standard methods.
In other words, you should use genetic algorithms when there's potential for large numbers of variables to interact and create unpredictable changes in a system. In a jet turbine design, so many potential changes can have an impact that accounting for them all ahead of time becomes practically impossible once you pass even a modest number of interacting variables.
The selection process inherent to genetic algorithms provides a way to deal with this uncertainty. By collecting "traits" into a gene pool, mixing them, then using a selection process to weed out the winners from the losers, you can stumble upon solutions that would have taken much longer to discover. Quite a few engineering optimization problems have been solved using this exact approach.
However, there's no such thing as the "perfect" algorithm for solving all problems of uncertainty, as Holland points out. There are situations when a traditional (or at least mixed) approach is more appropriate. But if you find yourself in a place where you're facing a combinatorial problem with many interacting variables (which is common in complex adaptive systems), try using a genetic algorithm approach.
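Holland's suggestion to augment the genetic algorithm with "other, more standard methods" can be as simple as a local refinement pass: let the GA find a promising region, then hand its best candidate to a greedy, one-variable-at-a-time hill climb. The sketch below illustrates the idea; the objective function, step size, and starting point are invented for illustration and have nothing to do with the actual jet-engine work.

```python
import random

def fitness(x):
    # Stand-in objective over a list of real-valued design variables (peak at 0.7 each).
    return -sum((v - 0.7) ** 2 for v in x)

def local_refine(solution, step=0.01, rounds=200):
    """Greedy single-variable hill climbing, applied after the GA has finished."""
    best = list(solution)
    for _ in range(rounds):
        i = random.randrange(len(best))          # pick one variable to nudge
        for delta in (step, -step):
            candidate = list(best)
            candidate[i] += delta
            if fitness(candidate) > fitness(best):
                best = candidate                 # keep the nudge only if it helps
    return best

# Pretend this came out of a GA run: close to the optimum, but not quite there.
ga_best = [random.uniform(0.5, 0.9) for _ in range(5)]
refined = local_refine(ga_best)
print(fitness(ga_best), "->", fitness(refined))
```

The division of labor matches Holland's observation: the genetic algorithm explores the landscape to locate regions of enhanced opportunity, while the simpler method squeezes out the last bit of improvement once only a few variables need tuning.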
When you deal with humanity's catalog of complex adaptive systems, you face a similar scenario. The future's no clearer than in the natural world, and traditional methods of planning and prediction aren't reliable. You can find better ways forward through the creation of "gene pools" and the implementation of your own types of genetic algorithms.
For example, let's say you want to evaluate a group of potential collaborators. The old way involves reading resumes, doing some interviews, then picking whoever feels like the best fit and hoping for the best. But you can reframe this in genetic terms, with each person representing a different potential combination with your mind and the minds of your other current collaborators.
First, gather a diverse set of candidates, your "initial population." You're not just looking at their resumes, but also at their potential—their ideas, their problem-solving skills, and their ability to adapt. This is similar to creating that initial gene pool for an organism within an ecosystem, full of varied traits and possibilities.
Next, evaluate their "fitness." This isn't just about how well they match the job description. It's about seeing how they handle real-world problems. Give them a project or a problem-solving exercise relevant to your business. You're looking to see how they think, how they work with others, and how they tackle challenges. Those who perform well, who show they can adapt and solve problems, have high "fitness" scores.
Then comes the selection for collaboration. You pick the individuals who showed the most promise during your evaluation phase. Just like in nature, you're not necessarily looking for the 'strongest' in the traditional sense, but for those who adapt best to the environment you put in front of them.
The "crossover" or collaboration phase is where things get interesting. You bring these selected individuals together, mixing their skills, experiences, and perspectives. This is where the magic of diversity really plays out. A mix of different traits tends to lead to better long-term outcomes for the group.
Mutation is essential as well. In a business context, this means you want your team to be able to come up with and explore new ideas, even if they seem a bit out there. This is how you avoid getting stuck in the same old ways of doing things. They should be able to inject randomness, generate a perturbation, and push you out of attractor basins on a regular basis.
The process repeats as you refine your team and methods. Over time, you keep evaluating, selecting, and mixing, always looking for that sweet spot of adaptability and innovation.
Finally, decide when to terminate the process. Maybe it's when you've hit a certain goal, or when the team is functioning smoothly and effectively, or when you feel you've got the best mix of people you can have. You may even decide to keep this process going forever, maintaining a highly diverse pool of talent that swaps in and out as demands change.
VCs have used a process akin to a genetic algorithm for decades. Since investments, especially investments in cutting-edge technology, have unknown return profiles, there's a strong incentive to leverage gene pool dynamics.
You begin by scouting a broad range of startups across various sectors and stages. This is a gene pool in reality, but it's referred to as a portfolio to sound more sophisticated. It includes a mix of different companies, each a representative of different business "traits" within the pool.
Due diligence and valuation are the venture capital version of fitness. You probe around the company, do background checks on the founders, and evaluate the overall value of what they're doing. The final dollar amount assigned to the company is an unambiguous measure of fitness. This is an imperfect process in which many VCs mistake their Patagonia sweaters for marks of genius, but it is built on a meta-pool of winning strategies which have been selected from past experience.
If you've ever wondered why VCs seem so obsessed with Stanford Computer Science degrees, that's why: there have been enough successes that they have spotted a pattern. It means they often get stuck in those attractor basins and miss big wins, but it's a (somewhat) rational pathway. Don't be that investor—there are plenty of other, more important signals to focus on.
After this fitness evaluation, you select the startups that present the best scores. An average VC firm sees hundreds, if not thousands, of pitches per year. Only a few make it through the filter. This isn't necessarily about picking the ones with the highest potential return, but rather those that balance risk with potential and align with your strategy. This could include a mix of safer bets in established markets and high-risk, high-reward investments in emerging fields.
Crossover involves blending aspects of your successful investment strategies. Perhaps you've had success with early-stage biotech firms and later-stage tech startups. In the next round of investments, they might look for opportunities that combine elements of both—like a health-tech startup at an early growth stage.
Introducing mutations can take many forms: exploring a new industry, changing the stage at which you invest (e.g., seed stage vs. Series A), or varying the size of your investment. These mutations are vital for exploring new markets and adapting to shifts in the startup ecosystem.
For a VC, termination might be when you've fully allocated your fund, when you've hit your target number of investments, or when market conditions change significantly. No matter what, there will be a point where all your dry powder is deployed and it's time to focus exclusively on evaluating the fitness of what you've invested in.
By approaching venture capital investment like a genetic algorithm, you're not trying to predict the next big hit in a highly unpredictable environment. Instead, you're creating a diversified and adaptable investment portfolio that can evolve with the market.
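As a toy illustration of the portfolio-as-genome framing, the sketch below encodes each candidate portfolio as a yes/no vector over a handful of hypothetical startups and evolves it against a budget constraint. The costs, expected returns, and budget are invented numbers, not real data, and real fitness would obviously involve far more than a single expected-return figure.

```python
import random

candidates = [  # (cost in $M, rough expected return in $M) -- made-up figures
    (2, 6), (5, 9), (1, 2), (8, 20), (3, 4), (4, 10), (6, 7), (2, 5),
]
BUDGET = 15
POP, GENS, MUT = 40, 60, 0.05

def fitness(genome):
    # A portfolio's "fitness" is its expected return, with over-budget picks marked unfit.
    cost = sum(c for g, (c, _) in zip(genome, candidates) if g)
    ret = sum(r for g, (_, r) in zip(genome, candidates) if g)
    return ret if cost <= BUDGET else -1

def breed(a, b):
    # Crossover blends two portfolios; mutation occasionally flips an investment decision.
    point = random.randint(1, len(candidates) - 1)
    child = a[:point] + b[point:]
    return [1 - g if random.random() < MUT else g for g in child]

population = [[random.randint(0, 1) for _ in candidates] for _ in range(POP)]
for _ in range(GENS):
    parents = sorted(population, key=fitness, reverse=True)[:POP // 2]
    population = [breed(random.choice(parents), random.choice(parents)) for _ in range(POP)]

best = max(population, key=fitness)
print("Chosen startups:", [i for i, g in enumerate(best) if g], "expected return:", fitness(best))
```

The point isn't that anyone should pick investments with thirty lines of Python; it's that the structure of the problem (many combinations, uncertain payoffs, a hard budget) maps naturally onto selection over a pool rather than prediction of a single winner.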
The primary downside of the gene pool approach is that it demands you have the resources to experiment. A venture capitalist is in a position to play this game because they have millions (or billions) of dollars to spread. Meanwhile, a solo investor or entrepreneur may have no choice but to put all of their money into a single, high-risk venture.
You run into the same problems in recruitment processes when you're squeezed for resources. Companies like Google and OpenAI, for example, can spend quite a bit of time and money to screen people across vast gene pools. So what's a smaller player to do?
First of all, you can use this approach across many different system types. Anytime you have many potential choices and no clear path forward, you can leverage a genetic algorithm approach. Consider the process of building your social network: you spread yourself out across many different friends and groups, then select the ones that you get along with best. That may require some money, but mostly it's about the time and energy expended to evaluate and select the people you want around you.
No matter what kind of resources you have at your disposal, spend some time asking yourself how you can utilize gene pool-style dynamics to solve your problems. You may be surprised what you can accomplish with an experimental mindset and the willingness to see what happens.