Tip of the Week #96                   Tip Index


"Evolving Inventions"

by John Koza, Martin Keane and Matthew Streeter, Scientific American, Feb. 2003, pp. 52-59.

Perhaps the most intriguing area of computer automation deals with "genetic algorithms" (and a specialized variant, "genetic programming," which evolves entire programs or designs rather than just parameter values).  These are techniques where solutions are evolved through a process analogous to genetics in biology.  I have been acquainted with genetic algorithms through two optimization tools associated with Microsoft Excel add-ins for Monte Carlo simulation.

These tools provide a way to optimize decision variables in non-linear, stochastic systems, i.e., typical and complex project and corporate models.

"Evolving Inventions" provides an overview of this extraordinary technique.  The examples developed by the authors are mostly design problems in electrical engineering.

"Genetic programming has duplicated 15 previously patented inventions, including several that were hailed as seminal in their respective fields when they were first announced."  Because U.S. patents were awarded for some of these machine-generated designs, the authors offer this as evidence that a machine can perform design tasks and produce original, creative work.

How it works

In any optimization, we start with a description of what is to be optimized.  This provides one or several criteria by which solutions will be judged.  Constraints are often part of the problem description.

Genetic algorithms begin with a "primordial ooze" of randomly generated trial "organisms."  Each organism (trial solution) is described by its "DNA," such as a series of numerical values.  The genes can also include primitive operators, such as addition and multiplication (if evolving a formula), or design components (if designing an electronic circuit, for example).

The algorithm evolves the solution in analogy to sexual reproduction and natural selection.

Genetic algorithms (GAs) work well on problems where the solution space is ill-behaved.  Simpler optimization techniques may become trapped at a local maximum, so the program must search broadly for possible solutions.  An initial random population and random gene mutations enable a broad ("scatter") search for good solutions; the best ones gradually converge, and usually one best (or at least very good) solution emerges.
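The article stays qualitative, but the whole cycle, random initial population, selection, crossover, and mutation, fits in a few lines of code.  Here is a minimal sketch in Python; the toy fitness function and the particular selection and crossover rules are my own illustrations, not the authors':

```python
import math
import random

random.seed(42)

# Toy fitness landscape with several local maxima; the global maximum
# is at x = 0, where fitness = 1.  A simple hill-climber started far
# from 0 can stall on a lesser peak, the trap a GA's broad search avoids.
def fitness(x):
    return math.cos(5 * x) * math.exp(-x * x)

def evolve(pop_size=50, generations=100, mutation_rate=0.2):
    # "Primordial ooze": a random initial population of trial solutions.
    population = [random.uniform(-3, 3) for _ in range(pop_size)]
    for _ in range(generations):
        # Natural selection: keep only the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction: crossover (average two parents' "DNA"),
        # with occasional random gene mutation for broad search.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a + b) / 2.0
            if random.random() < mutation_rate:
                child += random.gauss(0, 0.5)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Because the fittest survivors are carried forward unchanged each generation, the best solution found never degrades; in this toy run the population converges near the global peak at x = 0.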

Good reading.


I've been experimenting with a GA for a stock investment application.  It feels a lot like multi-criteria regression, though by brute force!  I'm running the GA to determine the best combination of values in a scoring function.  The objective is to realize a near-optimal portfolio value at the planning horizon.  Back-testing on a scant three months of data has been encouraging.  However, I fear that I'm seeing the "overlearning" (overfitting) effect that practitioners see with neural networks.  This is fun stuff!
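A hypothetical sketch of that kind of experiment follows.  The factor data, the linear scoring function, and the back-test fitness here are entirely my own illustrative inventions, not the model described above; the GA machinery is the same evolve-select-recombine loop:

```python
import random

random.seed(1)

# Hypothetical back-test data: each "stock" has three numeric factors
# and a subsequent-period return.  Both are random noise here, purely
# for illustration.
stocks = [([random.gauss(0, 1) for _ in range(3)],
           random.gauss(0.01, 0.05)) for _ in range(60)]

def portfolio_return(weights, top_n=10):
    # Score each stock as the weighted sum of its factors, then
    # "buy" the top-scoring stocks and average their returns.
    def score(factors):
        return sum(w * f for w, f in zip(weights, factors))
    ranked = sorted(stocks, key=lambda s: score(s[0]), reverse=True)
    return sum(r for _, r in ranked[:top_n]) / top_n

def evolve_weights(pop_size=30, generations=40):
    # Each organism's "DNA" is a vector of three scoring weights.
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=portfolio_return, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 + random.gauss(0, 0.1)
                     for x, y in zip(a, b)]
            children.append(child)
        population = survivors + children
    return max(population, key=portfolio_return)

best_w = evolve_weights()
```

Since the factors in this sketch are pure noise, any in-sample gain the GA finds is exactly the "overlearning" effect described above: the search fits the accidents of the back-test data.  Holding out data the GA never sees is the usual defense.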


—John Schuyler, February 2003.  Revised March 2003

Copyright 2003 by John R. Schuyler. All rights reserved. Permission to copy with reproduction of this notice.