Monday, September 22, 2008

Emergence of 'Super-Quant' Funds

'We’ll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it.'

—Ray Kurzweil

Is it possible for mankind to (directly or indirectly) create synthetic intelligences that supersede human intelligence?

Before answering that question, consider the following:
  • Quantum Computers: What are they? Dr. David Deutsch, author of The Fabric of Reality, sums up their potential: 'Quantum computers have the potential to solve problems that would take a classical computer longer than the age of the universe.' From Deutsch's description, one can conclude that quantum computers are vastly more powerful than any classical computer that has ever existed. But what are they? Answer: they are computational devices that solve intractable problems by exploiting distinctively quantum-mechanical properties (including entanglement and superposition) to represent and process data. Their computational power gives them the ability to perform accurate simulations of physical systems of comparable complexity, i.e., to predict how a simulated system will behave in nature: for example, mapping the mutation pattern of a drug-resistant virus, predicting the behavior of cells in unnatural environments (such as zero-gravity, zero-oxygen environments), or tackling hard optimization problems like the traveling salesman problem (see the first sketch after this list). From early 2009 onwards, quantum computing solutions (both hardware and software) will be available commercially from companies like D-Wave, a spin-out from the University of British Columbia whose team spans a diverse range of disciplines, including physicists, chemists, electrical engineers, cryogenics experts, mathematicians, and computer scientists. Since 1999, D-Wave has been developing a processor that uses a computational model known as adiabatic quantum computing (AQC) to solve complex search and optimization problems. This processor will hit the marketplace in 2009!
  • Continuation of Moore's Law on a non-silicon semiconductor substrate: In 1965, Intel co-founder Gordon Moore observed that the number of transistors that can be inexpensively placed on an integrated circuit doubles roughly every two years. This observation, an exponential compounding of computational power that is uncorrelated with the state of the economy, is what is known in the field of computing as Moore's Law. Moore's Law has mainly been driven by constant transistor shrinkage every two years. But there is now a fundamental barrier to its continuation: the limits of optical lithography. Light is used to etch the circuit patterns onto the silicon semiconductor material, and, according to Gordon Moore, "we're now approaching a point where the wavelengths are getting into a range where you can't build lenses anymore." This means it is becoming harder to pack more memory onto current CMOS processors, creating a memory bottleneck. Fortunately, ZettaCore has been developing ultra-dense, low-power, lower-cost molecular memory technology that can be improved, using existing capital equipment, over multiple generations. This means that the acceleration of technological progress can be driven by a new hardware technology, molecular memory, and that Moore's-Law-like progress will continue, albeit on a new substrate.
  • Emergence of hardware functional equivalents to the human brain: According to Ray Kurzweil's book The Singularity Is Near, a hardware functional equivalent of all regions of the human brain would need to handle approximately ten quadrillion (10^16) calculations per second (cps). Currently, the most powerful supercomputers in the world perform approximately a hundred trillion (10^14) calculations per second. Kurzweil notes that 'several supercomputers with one quadrillion cps are already on the drawing board, with two Japanese efforts targeting ten quadrillion cps around the end of the decade'. Hence, in line with his calculations, it is reasonable to say that computers performing ten quadrillion (10^16) cps should be available for around a thousand dollars by 2020 (the second sketch after this list works through the arithmetic such a claim assumes). Therefore, the notion of super-intelligent computers seems feasible from a hardware-performance perspective.
  • Convergence of the Microcosm and Telecosm: In his book Telecosm, George Gilder observed that the law of the microcosm is 'potentially converging with the law of the telecosm'. The law of the microcosm ordains that the value and performance of a network rise in direct proportion to the square of the increase in the number and power of the computers linked to it. Currently, computational power is increasing exponentially (in accordance with Moore's Law) and world usage of the internet is growing rapidly (between December 31, 2000 and now, world internet usage has grown by 305.5%). It is therefore evident that the microcosm and the telecosm are fusing, implying, as Gilder says, that the 'world of computers and communications can ride an exponential rocket' of progression. Thus, in the future, as Vernor Vinge suggested in his essay The Coming Technological Singularity: How to Survive in the Post-Human Era, 'large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity'. 'Wake up' in this context means to gain consciousness.
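
To put the 'intractable problems' point in the first bullet into perspective, here is a minimal, purely illustrative Python sketch (the city coordinates are random made-up data, not from D-Wave or Deutsch): it solves a tiny traveling salesman instance by brute force and then prints how quickly the number of candidate tours explodes.

    # A tiny brute-force traveling-salesman solver on made-up data.
    # The point is the factorial growth of the search space, not the route.
    from itertools import permutations
    from math import dist, factorial
    import random

    random.seed(0)
    cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(9)]

    def tour_length(order):
        """Total length of a closed tour visiting the cities in the given order."""
        return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    # Exhaustive search: fix city 0 as the start to avoid counting rotations.
    best = min(permutations(range(1, len(cities))),
               key=lambda rest: tour_length((0,) + rest))
    print("best tour:", (0,) + best, "length:", round(tour_length((0,) + best), 2))

    # How fast the number of distinct tours (with a fixed starting city) grows:
    for n in (10, 20, 30):
        print(f"{n} cities -> {factorial(n - 1):.2e} candidate tours")

Even at nine cities the exhaustive search already evaluates 40,320 orderings; by thirty cities the count is astronomically beyond anything classical brute force could enumerate, which is exactly the class of problem the novel architectures above are hoped to tame.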
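
And to make the hardware extrapolation in the third bullet concrete, here is a rough back-of-the-envelope sketch. The 2008 starting figure (roughly 10^10 cps for a thousand-dollar machine) is my own assumption, not a number from Kurzweil or from the sources above; the sketch simply computes how quickly price-performance would have to double for $1,000 to buy the brain-equivalent 10^16 cps by 2020.

    from math import log2

    # Back-of-the-envelope check under stated assumptions (not Kurzweil's figures):
    # if ~$1,000 buys roughly 1e10 cps in 2008, how fast must price-performance
    # double for $1,000 to buy the ~1e16 cps brain-equivalent figure by 2020?
    cps_per_1000_usd_2008 = 1e10   # assumed desktop-class starting point
    target_cps = 1e16              # brain-equivalent estimate cited above
    years_available = 2020 - 2008

    doublings_needed = log2(target_cps / cps_per_1000_usd_2008)
    implied_doubling_period = years_available / doublings_needed
    print(f"doublings needed: {doublings_needed:.1f}")
    print(f"implied doubling period: ~{implied_doubling_period * 12:.0f} months")

Under those assumptions, price-performance would need to double roughly every seven months; whether that pace is realistic is exactly the question the third bullet turns on.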
Therefore, one can argue that a technological singularity is imminent. However, the question of when is debatable. According to Ray Kurzweil's estimates, the singularity will occur within a quarter century. In his bestseller The Singularity Is Near, Kurzweil conjectures that by 2045, 'non-biological intelligence will match the range and subtlety of human intelligence'. He believes that non-biological intelligence will surpass human intelligence because of the 'continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge', and also because 'non-biological intelligence will have access to its own design, and will be able to improve itself in an increasing redesign cycle'. This all implies that trans-human intelligence will drive technological progress at an exponentially increasing pace, resulting in a profound and disruptive transformation of every facet of our lives.

...Possible effects of a technological singularity

In the blog-post titled The Hedge-Fund Strategy of The Future, it was noted, drawing on Michael Jensen's paper series on agency costs, that the agency gap widens greatly in times of rapid technological (and political) transformation. What does this mean? Answer: as we move closer to the technological singularity, the interests of shareholders and the management teams of corporations will increasingly diverge. This naturally implies that we could see an increasing number of corporate failures in the near future, i.e., the fall of the behemoths, or, in a milder scenario, companies that survive but continuously decline in profitability and market share. Is this a bad thing for society as a whole? It is hard to tell. The answer depends on a confluence of forces that are too complex to forecast or analyze accurately. However, it is safe to say that the blue-chips of the future are probably companies we've never heard of today. [Side-note: To prevent the agency gap from widening in their firms, top management will have to fire or re-assign more middle managers than is politically correct (as the skills and experiences of most middle managers increasingly become redundant). It would be interesting to see how top management performs in this respect. This would be the ultimate test of the courage of their convictions!]

The key role of information technology, according to Michael Jensen, is "taking the specific knowledge previously scattered through a firm and making it into general knowledge usable by all." Therefore, as the pace of technological change increases, specific knowledge scattered throughout a firm will increasingly become general knowledge usable by any member of the firm. Framed within the context of hedge funds: as the pace of technological change increases, a hedge fund's proprietary trading strategies will increasingly become general knowledge usable by its entire staff body (smart janitors included). This raises the likelihood of exposure of proprietary trading strategies, and with it the risk of imitation (a secret that everyone knows isn't concealed; it is banal knowledge!). This is something that will cause trepidation among hedgies, and it has the potential to culminate in full-scale future shock! [Side-note: How bad can it get? Well, picture in your mind a Theodore Kaczynski-esque luddite-hedgie type.]


...Emergence of 'Super Quant' Funds


Quantitative hedge-funds (quant funds) will be among the early adopters (in financial markets) of trans-human artificial intelligences. But firstly, what are quantitative hedge funds? Investopedia defines a quantitative fund (quant fund) as 'an investment fund that selects securities based on quantitative analysis. In such funds, the managers build computer-based models to determine whether or not an investment is attractive. In a pure "quant shop" the final decision to buy or sell is made by the model.' Quant funds are the most secretive of hedge funds; their investors have no clue about exactly how their money is invested (they invest based on faith!). This is why they are known as black-box funds; you can't see what is inside.

*****************

When viewed in abstraction, the algorithms used by quant funds to analyze market behavior and select stocks are essentially market paradigms that, when run on computers, produce market hypotheses and trades. Algorithms can also be viewed as computational equivalents of a human trader's cognitive abilities, knowledge, character, and experience. They are as diverse as individual human beings, and just as fallible as we are (after all, they are a human creation!). A toy sketch of one such algorithm follows below.
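
To make the abstraction concrete, here is a deliberately simple toy sketch in Python (hypothetical prices, not any fund's actual strategy): a moving-average crossover rule that converts a price series into a stream of buy/sell hypotheses, which is the sense in which an algorithm 'produces' trades when run on a computer.

    # A toy quant "algorithm" on made-up prices: a moving-average crossover
    # rule that emits buy/sell hypotheses. Illustrative only.
    def moving_average(series, window):
        """Trailing simple moving average; None until enough data exists."""
        return [None if i + 1 < window else sum(series[i + 1 - window:i + 1]) / window
                for i in range(len(series))]

    def crossover_signals(prices, fast=3, slow=5):
        """Emit (day, 'BUY'/'SELL') whenever the fast average crosses the slow one."""
        fast_ma, slow_ma = moving_average(prices, fast), moving_average(prices, slow)
        signals = []
        for t in range(1, len(prices)):
            if None in (fast_ma[t - 1], slow_ma[t - 1]):
                continue
            if fast_ma[t - 1] <= slow_ma[t - 1] and fast_ma[t] > slow_ma[t]:
                signals.append((t, "BUY"))
            elif fast_ma[t - 1] >= slow_ma[t - 1] and fast_ma[t] < slow_ma[t]:
                signals.append((t, "SELL"))
        return signals

    prices = [100, 101, 103, 102, 104, 107, 106, 103, 101, 99, 102, 105]
    print(crossover_signals(prices))

Swap in different rules, parameters, or data, and you have a different 'trader': the same cognitive skeleton, but a different character and set of experiences.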

Interestingly, quantitative funds first emerged in the financial industry during the 1970s (their algorithms usually ran on the Cray-1, one of the fastest computers of the era), and they became more popular as computational power increased over the decades. According to promoters of quant funds, their trading systems operate in a 'disciplined, non-emotional manner' (hyperbole!). Currently, according to a study conducted by the Aite Group, approximately 38% of all equity trades in the United States are automatically executed by quantitative trading systems. By 2010, quant trading systems are projected to execute approximately 52% of all equity trades.

Therefore, in the near future, quant funds are certainly going to influence market events in a big way!

When the technological singularity occurs, the trading systems used by elite quant funds (the funds best positioned to acquire the super-human technologies) will exceed human intelligence, and will be capable of reasoning, continuous real-time self-improvement of their algorithms, trading at ever-increasing speeds, and operating without the aid of human beings. They will have all the combined strengths of the best traders in the world, and none of the traders' human weaknesses. Hence, they'll outperform human qualitative traders in the markets.
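
'Continuous real-time self-improvement of algorithms' sounds mystical, but a drastically reduced picture of it is just a system that periodically re-fits its own parameters on recent data. The sketch below (made-up prices, a toy momentum rule, and a single tunable parameter, all illustrative assumptions of mine) is nowhere near trans-human intelligence; it only shows the shape of such a self-tuning loop.

    # A drastically reduced picture of "an algorithm improving itself":
    # every `refit_every` steps, the system searches its own parameter space
    # on recent data and adopts whichever setting would have scored best.
    import random

    random.seed(1)

    def momentum_pnl(prices, lookback):
        """Toy score: P&L of going long for one step whenever the average
        return over the last `lookback` steps was positive."""
        pnl = 0.0
        for t in range(lookback, len(prices) - 1):
            avg_ret = (prices[t] - prices[t - lookback]) / lookback
            if avg_ret > 0:
                pnl += prices[t + 1] - prices[t]
        return pnl

    def self_tuning_loop(price_stream, window=40, refit_every=10):
        lookback = 5                       # the algorithm's initial "design"
        history = []
        for t, price in enumerate(price_stream):
            history.append(price)
            if t >= window and t % refit_every == 0:
                recent = history[-window:]
                # The system inspects its own recent performance and revises itself.
                lookback = max(range(2, 15), key=lambda k: momentum_pnl(recent, k))
                print(f"t={t}: re-fit lookback -> {lookback}")
        return lookback

    prices = [100 + 0.05 * t + random.gauss(0, 1) for t in range(120)]
    self_tuning_loop(prices)

A genuinely super-human system would of course rewrite far more than one integer, and at far greater speed, but the loop of observing, evaluating, and revising one's own design is the same in outline.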

This will be the harbinger of a market crisis of apocalyptic proportions; everything that can go wrong, will go wrong.

The passage below will put the preceding assertion into perspective:

'Biological species almost never survive encounters with superior competitors. Ten million years ago, South and North America were separated by a sunken Panama isthmus. South America, like Australia today, was populated by marsupial mammals, including pouched equivalents of rats, deer, and tigers. When the isthmus connecting North and South America rose, it took only a few thousand years for the northern placental species, with slightly more effective metabolisms and reproductive and nervous systems, to displace and eliminate almost all the southern marsupials.

In a completely free marketplace, superior robots would surely affect humans as North American placentals affected South American marsupials (and as humans have affected countless species).'

—Hans Moravec from his book titled Robot: Mere Machine to Transcendent Mind

Do the maths! :-)

P.S. I'm not a luddite.

...To be continued

Thursday, September 18, 2008

Imitation risk

...Europe During the Middle-Ages

"The whole world admits unhesitatingly; and there can be no doubt about this, that Gutenberg's invention is the incomparably greatest event in the history of the world”

—Mark Twain

During the Middle-Ages, Europe was made up of a series of squabbling fiefdoms. The biggest single landowner was the Roman Catholic Church, and most of Europe's wealth was concentrated in the hands of the church. Together with the feudal aristocracy, the church monopolized knowledge and influenced all the political, economic, social, and technological events that unfolded not only in Europe, but in the whole world.

European society was hierarchically stratified, with the church and the monarchs at the upper echelons, the aristocrats occupying the second tier (in terms of influence), the merchant class occupying the middle strata, and the peasants occupying the lowest stratum. However, towards the end of the Middle-Ages, the rigid class boundaries and the church's monopoly on power collapsed because of one thing: Johannes Gutenberg's movable-type printing press.

It helped to break the church's monopoly over access to knowledge and empowered the masses with knowledge that undermined, albeit indirectly, the status quo.

...Media Communications Technology

From the historical account above, one can see that communications media technology has the potential to improve the welfare of the masses: an upside. But it also has a downside: the potential to harm.

Technology is in itself neutral; it is neither negative nor positive. It is like a surgical knife: a life-saving tool in the hands of a seasoned surgeon; in the hands of a psychopath, the same life-saving knife becomes a life-threatening tool. By the same token, communications media technology is in itself neutral, neither negative nor positive. Its usage is shaped by prevailing social values and 'in-season' societal trends.

Currently, society has an insatiable hunger for information on hedge funds; people are conscious of the enormous fortunes being made by alpha-hedgies (e.g., Paulson, Cohen, Griffin, Soros, Simons, etc.), and naturally, the general populace (and the beta-hedgies) want to know exactly how those fortunes are made. This leads them on a search for 'position-level' details on the activities of alpha-hedgies, which is where communications media technology comes in.

Communications media technologies are integrated into every facet of our daily lives, and they are interconnected amongst themselves. This means that every facet of our lives is potentially within the reach of the entire spectrum of communications technologies. To put this into perspective: a confidential email can, within a matter of seconds, find its way onto a blog or online forum; readers of that forum or blog can quote 'interesting bits' of the email and send them off to their buddies via email, instant messenger, text message, and so on, and their buddies will in turn send the quotes to their friends, and so on. Within a matter of minutes, references to the 'confidential email' will have circumnavigated the globe!

Has this happened within the context of hedge funds? Yes! The obvious example is Daniel Loeb's emails, although they had nothing to do with position-level data. One recent example of a widespread leak of confidential position-level information concerns the email exchange between Hohn and Degorce of The Children's Investment Fund. However, this leak didn't have any adverse effects.

Side Note/Food for thought: Have you ever wondered why Petroleo Brasileiro SA is the most popular stock among hedgies (according to Goldman Sachs' VIP list of the 50 most common stocks in hedge-fund holdings)? (Hint: Soros Fund Management has an $811 million stake in Petrobras, as of July 2008.)

...Why should the average person be concerned about the exposure of proprietary hedge-fund strategies?

Let us postulate the Finagle's-Law scenario, in which anything that can go wrong, goes wrong: position-level data seeps from an alpha hedge fund to other market participants, including beta funds whose investment focus is similar to the alpha fund's. When the beta funds receive this information, they replicate the alpha fund's proprietary trading strategies and build portfolios that (structurally) resemble the alpha fund's portfolio. This pattern of imitation results in the formation of a super-portfolio, administered by fund managers with a diverse range of skill sets who are also subject to varying investment circumstances.
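
A crude way to quantify the crowding this imitation produces is to measure how much of the imitators' capital sits in the same positions as the alpha fund's. The sketch below uses made-up tickers and weights (purely hypothetical) and a simple overlap metric: the sum, over the positions two funds share, of the smaller of their two weights.

    # Hypothetical holdings (made-up tickers and weights) illustrating how
    # imitation produces a crowded "super-portfolio".
    alpha_fund = {"AAA": 0.30, "BBB": 0.25, "CCC": 0.20, "DDD": 0.15, "EEE": 0.10}

    beta_funds = {
        "beta_1": {"AAA": 0.28, "BBB": 0.22, "CCC": 0.20, "FFF": 0.30},  # heavy imitator
        "beta_2": {"AAA": 0.10, "GGG": 0.50, "HHH": 0.40},               # loose imitator
    }

    def overlap(p, q):
        """Fraction of capital (0..1) held in positions common to both portfolios."""
        return sum(min(p[k], q[k]) for k in p.keys() & q.keys())

    # Aggregate exposure across all funds shows where the crowding builds up.
    aggregate = dict(alpha_fund)
    for holdings in beta_funds.values():
        for ticker, weight in holdings.items():
            aggregate[ticker] = aggregate.get(ticker, 0.0) + weight

    for name, holdings in beta_funds.items():
        print(f"{name} overlap with alpha fund: {overlap(alpha_fund, holdings):.0%}")
    print("most crowded positions:",
          sorted(aggregate.items(), key=lambda kv: kv[1], reverse=True)[:3])

Even a handful of partial imitators can pile a large share of total capital into the same few names, and those crowded names are the raw material of the super-portfolio described above.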

The consequential super-structure, made up of large overlapping portfolios, will be fragile and prone to collapse when even a minor perturbation occurs. Business cycles have upturns and downturns, and when a downturn occurs, the super-portfolio unravels and takes the hedge funds (both the alpha and the beta funds) down with it. When this happens, the globalized world economy goes into distress! In this cataclysmic scenario, the return-to-risk ratios of securities held by the funds plunge toward negative infinity: the securities take on a deeply asymmetrical relationship between risk and return.

The 1998 debacle of Long-Term Capital Management (LTCM) is the closest empirical parallel to the aforementioned Finagle's-Law scenario. According to Donald MacKenzie's research paper titled Risk, Financial Crises, and Globalization: Long-Term Capital Management (LTCM) and the Sociology of Arbitrage, LTCM experienced outstanding success practising convergence arbitrage in the following markets: US government bonds, bond derivatives, mortgage-backed securities (CDOs and CLOs), international stock markets, and equity derivatives. When LTCM's success became well known, other market players operating within LTCM's investment space started imitating its investment strategies.

This worked in LTCM's favor during the initial stages of imitation, as arbitrage positions that would normally take long periods to become profitable converged more quickly. However, this all changed on Monday the 17th of August 1998, when the Russian government defaulted on a scheduled interest payment (coupon) on ruble-denominated debt and devalued the ruble: a black swan event. This triggered a chain of losses in the ruble-denominated bond market, where LTCM and its 'replicas' held positions, and market players began dumping ruble-denominated bonds to mitigate their losses.

When yield curves inverted and unanticipated losses occurred, value-at-risk models (marking positions to market) told LTCM and its 'replicas' to raise liquidity and limit exposure. This triggered a selling spree of the securities, including shares, non-bond derivatives, and so on, held by LTCM and its 'replicas': there was a fire-sale. Eventually, markets were full of sellers and not enough buyers, and securities became unsellable.

Capital was eroded, hedge-fund investors withdrew their assets, and a crisis that had seemed minor now threatened the entire global financial system. Everyone was in a panic! What initially appeared to be an idiosyncratic risk factor had become systemic!

And so on, and so on, etc, etc

The example above illustrates why hedge funds should be concerned about imitation risk, especially in this era of heightened societal interest in hedge-fund strategies, where new communications media technology aids the rapid dissemination of proprietary strategies (if they happen to leak out).