The Top 20 Applications for an Infinitely Fast Computer



Since the dawn of time, man has yearned to extract more CPU cycles from the calculating machine we call the 'computer'. And no wonder either; faster speeds equate to more fun, greater productivity, and faster scientific progress. In a way, we've been spoilt with our multi-GHz CPUs. Compared to 20 years ago, most tasks now are performed in the blink of an eye.


But on the other hand, the reverse could just as easily be true: maybe we subconsciously expect infinite levels of speed, and detest the limits we still have to endure. Perhaps we loathe optimizing and pandering to the whims of the CPU, only to find that the running speed still isn't up to par. For sure, there are many problems and tasks which require exponentially faster speeds than anything we have today. And even our best algorithms don't come close (or can't ever come close) to solving them.

"You can raise the argument that intractability is relative. You can boldy thrust forward Moore's Law - like a child that's made a macaroni bird in art class - but if you do, you're not getting it. Intractable is bigger than Moore's Law. Intractable is like, thermodynamics big." - Johnath
Nice quote, but did he bargain for an infinitely fast CPU?
And that's what this article will address. The potential that lies in an infinitely fast computer. Not just one that is super humungously fast (tm), but one which pops out the answer in O(0) each time, every time. Finally, we'd be able to beat a 9th dan Go player, and words such as "intractable" or "combinatorial explosion" would lose their meaning. Of course, truly infinite speed is as much a pipe dream as infinite free energy, and it's true that progress in raw GHz has ground to a halt over the past few years. But we always have the promise of multi-core and even quantum computing to help quench our thirst. While truly infinite speed may be forever beyond our reach, these technologies may eventually help materialize at least some of what's below.

And hence we present the top 20 (well 17) applications that would benefit most from infinite speed (or close enough), starting with the least desirable, and finishing with the dream application. Zero latency and infinite memory is assumed.




17: Fractal exploration and the 3D Mandelbrot
16: More responsive computers
15: Cutting stock and packing
14: Composing music
13: Alien hunting


12: Vehicle routing problem
11: Protein folding simulation
10: Unification of custom chips
9: Weather forecasting
8: Graphics (creating, rendering, modelling)
7: Music/sound analysis


6: Models for a universal theory of nature
5: Graphics (end user)
4: Rapid software development
3: Physics and particles (entertainment)
2: Artificial Intelligence
1: Physics and particles (science)


This excellent 3D Mandelbrot-shaped picture (rendered by Thomas Ludwig) looks like the real deal, although closer inspection shows it's not quite what we're looking for.

17: Fractal exploration and the 3D Mandelbrot


Okay, maybe I'm biased here, but I had to tag this one in. Fractals can be awesome creatures, but realtime exploration isn't possible due to the calculations needed for deep zooming, decent antialiasing (up to 32x32 oversampling per pixel is needed for maximum quality!), and, for more complicated fractals, the raytracing of 3D structures.
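
To give a feel for why realtime fractal exploration is so expensive, here's a minimal escape-time Mandelbrot sketch in Python with NxN oversampling per pixel (all parameters are illustrative). At the 32x32 sampling mentioned above, every pixel costs 1024 full escape-time calculations before you even consider deeper zooms, higher iteration caps, or 3D raytracing.

    # Minimal escape-time Mandelbrot with NxN supersampling per pixel.
    # Image size, zoom and iteration cap are illustrative only.

    def escape_time(cr, ci, max_iter=256):
        zr = zi = 0.0
        for n in range(max_iter):
            zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
            if zr * zr + zi * zi > 4.0:
                return n
        return max_iter

    def render(width=64, height=48, centre=(-0.5, 0.0), scale=3.0, ss=2):
        """Average ss*ss sub-samples per pixel; ss=32 means 1024 samples each."""
        image = []
        for y in range(height):
            row = []
            for x in range(width):
                total = 0
                for sy in range(ss):
                    for sx in range(ss):
                        cr = centre[0] + ((x + (sx + 0.5) / ss) / width - 0.5) * scale
                        ci = centre[1] + ((y + (sy + 0.5) / ss) / height - 0.5) * scale * height / width
                        total += escape_time(cr, ci)
                row.append(total / (ss * ss))
            image.append(row)
        return image

    if __name__ == "__main__":
        img = render()            # tiny image; raise ss and the size at your peril
        print(img[24][32])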

Furthermore, with CPU speed limits a thing of the past, we can hunt for the "Holy Grail" of fractals - the real 3D Mandelbrot. We covered this curious beast in an earlier article, and theorized that it would look like the most awesome fractal ever. If it existed.


The idea would be to brute force through trillions of different formulas, render them all, and see which resulting 3D picture comes closest to the mathematical translation of a "3D cardioid-style apple core surrounded by spheres", or even just "smaller spheres touching other spheres, and yet smaller ones tangent to those". More compact, elegant formulas would take preference over drawn-out, arbitrary formulas which may produce trivial/fake versions of the aforementioned design.

Using infinite CPU power is an odd way to solve such an intriguing problem, but frankly I don't care how it gets solved as long as I can glimpse the 3D Mandelbrot for even one second.



[source: Gnome 3D Tetris]
Playing 3D tetris with cubes is one thing, but when you switch to irregularly sized cuboids and other shapes, things start to get more tricky...

16: More responsive computers


A very simple and obvious use, but one that would prove very welcome. The GUI of the OS would become much more responsive with no apparent lagging or freezing. Yes, even in Windows Vista potentially.

15: Cutting stock and packing


There is a wealth of industries that rely on packing things into a space as efficiently as possible. Likewise, cutting material to minimize waste is tricky, even in the 2-dimensional version of the problem.

Actually, it's probably the NP-complete 3D packing problem which would benefit most. As yet, no polynomial-time algorithm has been found which would help here, and the evidence hints there never will be. Infinitely fast CPUs, however, eat these kinds of problems for breakfast.
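
As a toy illustration of why sheer speed trivializes these problems, here's a hedged sketch of the one-dimensional cutting-stock case (made-up piece lengths and bar size): try every possible assignment of pieces to stock bars and keep the feasible assignment that uses the fewest bars. The 3D packing version only makes the combinatorial explosion worse.

    from itertools import product

    def min_bars(pieces, bar_length):
        """Brute-force 1D cutting stock: try all O(n^n) assignments of pieces
        to bars and return the fewest bars that hold everything."""
        n = len(pieces)
        best = n                                   # worst case: one bar per piece
        for assignment in product(range(n), repeat=n):
            used = [0.0] * n
            for piece, bar in zip(pieces, assignment):
                used[bar] += piece
            if all(u <= bar_length for u in used):
                best = min(best, sum(1 for u in used if u > 0))
        return best

    # Hypothetical pieces cut from 100-unit stock bars -> prints 3 for this data.
    print(min_bars([45, 60, 30, 55, 40], 100))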




Renoise uses a way of composing quite unlike software such as Reason, Sibelius or Cubase. Music is entered via a grid-like array, rather than a piano roll. It's tricky to use at first, but in my opinion at least, once you get used to it, it's ultimately more efficient.

14: Composing music


Unlimited amounts of speed would be a boon for composing, especially when today's VSTs (effectively software synthesizers) can gobble 20% or more of the CPU per channel. Programmers needn't worry about efficiency, and can concentrate on simplicity in their VSTs. Multiple effects such as echo, reverb, phase, or EQ can be set for each track/channel, again without having to worry about annoying hiccups in the playback. Coding kludgy workarounds such as 'freezing' will be a thing of the past, as will latency/timing issues.

On the sound processing front, it's still difficult to generate perfectly simulated reverb or time/pitch stretching on the fly. Any number of other effects, particularly those involving countless 'granules' of sound, would become possible, unleashing new musical possibilities.





Source: Le Voyage dans la lune (A Trip to the Moon) (1902). Although we may not find intelligent life on the moon, we should use the moon's far side to set up a SETI base. This would help avoid the atmosphere and EM noise that Earth-based telescopes have to endure.

13: Alien hunting


SETI could use the speed to analyse galaxies for possible signs of life.

Analysing the EM signals our telescopes receive is not an easy task. Amongst the noise, SETI has to pick out particularly dominant frequencies. That sounds reasonable until you consider that there are mountains of sky, multiple Libraries of Congress worth of frequencies, and that any communication is likely to be pulsed if the alien life is intelligent. There's also the difficulty of Doppler shifting, whereby any candidate frequency slowly drifts up or down.
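
To give a flavour of the kind of search involved (a hedged toy, not SETI's actual pipeline), a linearly drifting narrowband tone can be hunted by 'de-drifting' the complex signal with a conjugate chirp for each candidate drift rate, then taking an FFT and looking for a spike. The brute-force version simply repeats this for every drift rate, every frequency, and every patch of sky. All the parameters below are made up.

    import numpy as np

    # Toy drift-rate search: bury a weak drifting tone in noise, then brute-force
    # candidate drift rates and keep whichever yields the sharpest FFT peak.
    fs, T = 1000.0, 8.0                          # sample rate (Hz), observation (s)
    t = np.arange(0, T, 1 / fs)
    true_f0, true_drift = 123.0, 2.5             # Hz and Hz/s (made-up values)
    phase = 2 * np.pi * (true_f0 * t + 0.5 * true_drift * t ** 2)
    signal = 0.1 * np.exp(1j * phase) + np.random.randn(t.size) + 1j * np.random.randn(t.size)

    best = None
    for drift in np.arange(-5.0, 5.0, 0.1):                        # candidate drift rates
        dedrifted = signal * np.exp(-1j * np.pi * drift * t ** 2)  # cancel the chirp
        spectrum = np.abs(np.fft.fft(dedrifted))
        peak = spectrum.max() / spectrum.mean()                    # peak prominence
        if best is None or peak > best[0]:
            best = (peak, drift, np.fft.fftfreq(t.size, 1 / fs)[spectrum.argmax()])

    print("best drift ~ %.1f Hz/s, tone ~ %.1f Hz" % (best[1], best[2]))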

Quote from Sciencemag.org:
    "To find the needle in a galaxy-size haystack, SETI workers are counting on the consistently exponential growth of computing power to continue for another couple of decades."
That quote was made in 2005, but it would seem to apply just as well today. One would assume they are accumulating data faster than they can fully analyse it.



12: Vehicle routing problem


The "Travelling Salesman" is a classic problem devised in 1930 and has since been the subject of much study. Despite its intrinsic O(n!) complexity, it's got to a point where some complicated heuristics can solve for millions of cities within an accuracy of 2-3% (perfect accuracy will still require O(2^n) at best).

But real life always tends to throw a spanner in the works. There are often added complications such as travel costs and capacities, time-window restrictions, and different start/end locations for vehicles. All of these fall under the more general category of the Vehicle Routing Problem.

Good progress has been made in this regard, using heuristics such as Tabu search (ref). But until exponentially faster CPUs pop into existence, we will continue to spend time tweaking parameters for special-case algorithms, and still obtain results that won't quite be optimal.





Created by the amazing artist - Naohisa Inoue. Apparently, they sell infinitely fast computers the size of bubblegum for the price of peanuts in the market store shown here. That would help in folding proteins efficiently and relatively quickly.

11: Protein structure and folding prediction/simulation


The simulation of a single protein fold currently takes years of computer time to reproduce what nature does in microseconds (a gap of around 30 to 100 trillion times, apparently). Folding@Home currently utilizes 100,000 processors working in parallel, which is a massive improvement, but divide that gap by 100,000 and we're still left roughly a billion times slower than realtime.

The fast simulation of protein folding would help us to understand and find cures for Alzheimer's, AIDS, and cancer much more easily. At least in theory. It would seem that even infinite CPU speed would not hand us an instant 'magic cure' for anything; rather, the results would help steer us in the right direction.



10: Unification of custom chips



Courtesy of Intel: their upcoming Larrabee CPU/GPU - and not, for example, the schematic of the hoverboard from BTTF.
With the CPU running at chronic speeds, we can forget about graphics cards, co-processors, custom chips, and anything of that sort. This not only unifies the architecture, but no doubt saves countless millions of man-hours needed to research CPU design and co-processors in the first place. The CPU itself would be reduced to its simplest design, and any sound or graphics output can be sent directly from the memory/CPU to the audio/visual device.

To some extent, the CPU and graphics card are already converging, as each aims to dig into the other's traditional territory. Graphics cards can already be used for general-purpose computing, and as CPUs gain more cores, they will no doubt start to encroach on graphics card functions (e.g. via raytracing). Intel's new Larrabee processor seems to set a precedent by combining many advantages of both the CPU and GPU, though only time will tell whether it serves either purpose very well.

It's weird this one, because we theorized about the idea 16 years ago in an old fanzine article on the 'future of computers'. Read it for a laugh and for some strangely accurate predictions which may yet come to pass.





[Source: US Air Force]. Weather will be as unpredictable as the stunning Aurora Borealis after around two weeks.

9: Weather forecasting


Currently, we can predict the weather well around 5-8 days ahead. The theoretical limit is around two weeks. After that point, chaos theory takes over, and it's anyone's guess whether it will rain or shine.

Of course, predicting the weather more accurately would help farmers know when to plant and harvest crops, construction companies when to build, and shipping/transportation companies which routes to take, and it would help us forecast very dangerous weather (saving property and lives). It would also seem that computational power is the main limiting factor in our ability to do so (except perhaps for dangerous weather, where observational data over the oceans is limited). I quote from here:
    In short, computing power [rather than observational data] is the limiting factor when it comes to extending the range and accuracy of weather forecasts. Therefore, the future of computers will largely determine the future of forecasting.
Infinite speed may be overkill for this problem, but it seems that at least a dozen orders of magnitude more CPU power will be required to reach the limit of weather forecasting.


8: Graphics (creating, rendering, modelling)



Source: Grzegorz Tanski. In the future, all games and 3D software will use full global illumination. Unless the scene objects are static, I wouldn't hold your breath however, because CPU time goes through the roof...
The obvious application for a mega-fast CPU would be for 3D modelling software. Fully ray-traced images with global illumination can replace wireframe shortcuts. True WYSIWYG - see the world in realtime as if it were the final render.

Even 2D programs such as Photoshop would enjoy the speed-up, as working on multiple layers with complex gradients and textures would be a breeze.

Here are two more grand ideas:

Convergence of vector and bitmap editing

    For 2D graphics, there would be a nice convergence of vector editing (structured drawing) and bitmap editing (raster/pixel-style painting). Vector editing has always had the advantage of keeping track of the points and mathematical definitions for every shape on the screen. But in a number of regards, it has always fallen short of replacing bitmap editing.

    A perfect example is the smudge or blur tool. Two significant problems arise - namely the representation of the blurred area, and the CPU speed. The former problem can be at least partly solved by representing the blurred area as a new pseudo object (as if the smudge/blur has been freshly drawn in by the user each time the picture is edited or refreshed).

    Another good example is detail level. Drawings comprised of vectors can only contain so much detail before the PC is choked to death by millions of points. The drawing then becomes cumbersome and tedious to edit.

    With infinite speed however, both of the above problems can be overcome, and at last we can use a unified graphics editor that acts like a bitmap editor with effectively infinite resolution, and the ability to re-edit previously drawn shapes like vector editors can.

True 3D Voxel Editing and beyond


Courtesy of Sevens Heaven. See more of his creations here. Appearances can be deceiving - this is what you really see if you get inside the arcade monitor and view from the side.
    What Photoshop did for 2D graphics, a hypothetical program would do for 3D. Instead of painting on a flat 2D canvas, structures would be 'sculpted' using billions of 'voxels' - tiny cube equivalents of the square pixel (a rough sketch of the idea follows at the end of this section). There's nothing really like it out there, and no wonder either, as such a program would kill memory and CPU in one shot.

    Oh, that's not to say some haven't tried. The closest realization of the idea is probably a program called 'ZBrush'. It's a very curious program which can produce stunning results. However, it doesn't use true voxels, but instead uses pixels with a particular depth value. That means surfaces are usually one-sided, so you can't view them from behind (or place one 'voxel' behind another for that matter - it's still a 2D array after all).

    Of course, a step above even true voxel painting would be to incorporate the versatility of re-editable 3D vectors with the flexibility of painting voxels (a 3D equivalent of the unification of vector and bitmap editing as mentioned in the previous section). Along with a decent 3D mouse interface, creativity would be completely unbounded. I don't expect anything like it in my lifetime, that's for sure.
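
As a rough illustration of what a 'true' voxel canvas might look like under the hood (a naive sketch, and emphatically not how ZBrush or any shipping sculpting package works), a sparse map from integer (x, y, z) coordinates to a colour is already enough to paint, carve and over-paint from any direction. The catch is simply that billions of occupied cells flatten today's memory and CPU budgets.

    # Naive sparse voxel canvas: a dict keyed by integer (x, y, z) coordinates.
    # Purely illustrative - real tools use cleverer structures (octrees and the like).

    class VoxelCanvas:
        def __init__(self):
            self.cells = {}                                   # (x, y, z) -> colour

        def paint_sphere(self, cx, cy, cz, r, colour):
            """Stamp a solid sphere of voxels - the 3D analogue of a round brush."""
            for x in range(cx - r, cx + r + 1):
                for y in range(cy - r, cy + r + 1):
                    for z in range(cz - r, cz + r + 1):
                        if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r:
                            self.cells[(x, y, z)] = colour

        def erase_sphere(self, cx, cy, cz, r):
            """Delete every voxel inside the given sphere."""
            for (x, y, z) in list(self.cells):
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r:
                    del self.cells[(x, y, z)]

    canvas = VoxelCanvas()
    canvas.paint_sphere(0, 0, 0, 20, (200, 80, 80))   # sculpt a blob
    canvas.erase_sphere(10, 0, 0, 12)                 # carve a bite out of it
    print(len(canvas.cells), "voxels occupied")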




Converting from MIDI to MP3/WAV is essentially trivial - you just record what comes out. But doing the reverse - converting from WAV to MIDI - taxes the strongest AI algorithms, and is beyond our level of science for now.

7: Music/sound analysis


In general, music information retrieval is very useful to automatically classify, index, search, and analyse music. Possibilities include translating MP3 to score/MIDI (which is still very tricky), and individual instrument extraction to use in a new composition. Signal analysis is also useful for speech and voice recognition of course. But what else could a zippier CPU do for us here?

Well, there are services out there that attempt to find music similar to your favourites based on various attributes. But that requires a lot of CPU time, as exhaustive pair-wise comparison across large music databases is required. Yes, techniques such as locality-sensitive hashing can be used to reduce high-dimensional data to a more compact form, but these can be difficult to implement and maintain, and are generally a kludge which won't necessarily be as accurate as sheer brute force.

One of the fundamental techniques used before analysing a sound is to first split the signal into a frequency spectrum. This is usually done with STFT/FFT techniques, but with an infinitely fast CPU, one could instead try every possible combination of individual sine waves - each with its own frequency, amplitude and phase - mix them, and see which combination comes closest to the given signal window. Some signals may need only one or two sine waves to get close, whilst others may require hundreds or even thousands. It would be computationally prohibitive, but apart from being conceptually simpler, there's also the chance that brute forcing like this may at least partially overcome the 'uncertainty principle' whereby there's a compromise between frequency resolution and time resolution.
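
Here's a hedged toy version of that idea: a grid search over a single sine wave's frequency, amplitude and phase against a short window. Real audio would need thousands of components and a vastly finer grid, which is exactly why nobody does it this way today. The sample rate, grid steps and 'mystery' signal are all made up.

    import numpy as np

    # Toy brute-force spectral fit: grid-search one sine's frequency, amplitude
    # and phase and keep whichever combination best matches the signal window.
    fs = 8000.0
    t = np.arange(0, 0.05, 1 / fs)                         # a 50 ms window
    target = 0.7 * np.sin(2 * np.pi * 440.0 * t + 1.0)     # pretend mystery signal

    best = (np.inf, None)
    for freq in np.arange(100.0, 1000.0, 5.0):
        for amp in np.arange(0.1, 1.01, 0.1):
            for phase in np.arange(0.0, 2 * np.pi, 0.2):
                candidate = amp * np.sin(2 * np.pi * freq * t + phase)
                err = np.mean((candidate - target) ** 2)
                if err < best[0]:
                    best = (err, (freq, amp, phase))

    print("best fit: %.0f Hz, amplitude %.1f, phase %.1f rad" % best[1])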



6: Models for a universal theory of nature



Courtesy of NASA. Unravelling the secrets of dark matter/energy may be the missing key we need to form a grand unified theory of everything. Alternatively, they may help us to achieve tastier ice cream.
Although one of the problems in finding a "Grand Theory of Everything" is the lack of data (i.e. we don't know certain things that happen at subatomic scales), the other large obstacle is finding the perfect set of equations to match all known data. Distributed projects such as Cosmology@Home may find some answers, but a brute-force search on an infinitely fast computer bypasses this second problem by enumerating all possible theories and filtering them down to the most elegant (shortest?) ones. We're not guaranteed to find the universal theory, since a never-ending series of approximations may be required (each converging to the 'truth' but never quite reaching it). But if nothing else, we could see how many competing theories could possibly exist!
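
A toy flavour of that 'enumerate and filter' idea is sketched below - purely illustrative, since real physics will not fall out of a ten-line loop. It generates every small formula from a made-up grammar, keeps the ones that reproduce some pretend observations, and prefers the shortest survivor.

    from itertools import product

    # Toy 'theory search': enumerate tiny formulas over x, keep those that match
    # the observations exactly, and prefer the shortest. Data and grammar are made up.
    observations = [(x, 3 * x * x) for x in range(-5, 6)]     # pretend measurements

    terms = ["x", "x*x", "x*x*x", "1", "2", "3"]
    ops = ["+", "-", "*"]
    candidates = terms[:]                                     # depth-1 formulas
    for a, op, b in product(terms, ops, terms):               # depth-2 formulas
        candidates.append("(%s)%s(%s)" % (a, op, b))

    def fits(expr):
        return all(abs(eval(expr, {"x": x}) - y) < 1e-9 for x, y in observations)

    matching = sorted((e for e in candidates if fits(e)), key=len)
    print(matching[0] if matching else "no theory found at this depth")   # -> (x*x)*(3)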

Finding something resembling a Theory Of Everything would probably allow many advances in engineering just as the discovery of relativity led to better materials, fission and GPS, or how the discovery of quantum mechanics led to the laser and microchip. We could then see the limits to space travel and know for sure if faster-than-light travel (through wormholes etc.) is attainable. The big question about exactly how the universe began (and even what came before that) can be answered once and for all.



5: Graphics (end user)


Created by Gilles Tran. It'll be a while before we start seeing this kind of quality in video games.

Photorealistic (or heavily detailed surreal/fantasy/psychedelic) imagery would become the norm. Games would of course look more glorious. The rendering equation could be solved perfectly, so developers could go overboard with global illumination, caustics, sub-pixel sampling, reflections, refractions and atmospheric effects, all with limitless levels of recursion. The creative process would never be hampered by how many polygons or B-splines were allowed. All video could be made super-smooth too (500 frames per second - approaching the limit of perception).

For example, Toy Story 2 took from 2 to 20 hours per frame, or about five hours per frame on average, to render. To render the same thing in realtime for a video game (say 60 FPS), you would need a processor just over 1,000,000 times faster than what we have today. And that's mostly using Reyes rendering (which relies chiefly on rasterization techniques, with only minimal ray tracing).
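
The back-of-the-envelope arithmetic behind that figure, using the five-hour average from the article and a typical 60 FPS game refresh rate:

    # Rough speed-up needed to render Toy Story 2 frames at game rates.
    seconds_per_film_frame = 5 * 3600          # ~5 hours per frame on average
    seconds_per_game_frame = 1 / 60            # 60 frames per second
    print(seconds_per_film_frame / seconds_per_game_frame)   # ~1,080,000x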

Actually, maybe we won't have to wait long for some of this extravagance. Technology is slowly beginning to produce ray-traced graphics in realtime. Next stop, path-tracing please.




Created by Patrick J. Lynch (CC-BY 2.5).
Less brain power will be required when coding in ultra-high-level languages, and hence we can get away with writing slower but more readable code. As a side effect, we could even emulate the brain itself, and get that to do the tedious code writing for us.

4: Rapid software development


In programming, there's often a balance to strike between readability/maintainability/modularity and the speed of code. That would all change though with an infinitely fast processor. All code would be written with the former in mind with little or no regard to efficiency. There would be no need, and so software would be much quicker and cheaper to develop. Lower level languages such as assembler, C or even Java can go for walkies. Instead, something more BASIC, Ruby, or Python-like would be the future. Or maybe something more declarative such as Prolog should be used, where one defines the outcome rather than the steps needed to achieve the outcome.

One example of simpler algorithm development would be sorting. Suddenly, Bubble Sort would start to make sense. Actually, scrap that: Bozo sort, one of the archetypes of bad sorting, could now be the one to go for.
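
For the record, here's a sketch of Bozo sort: swap two elements at random and check whether the list happens to be sorted yet. Its expected running time is astronomical, which stops mattering the moment every attempt is free.

    import random

    def bozo_sort(items):
        """Bozo sort: randomly swap pairs until the list happens to be sorted."""
        items = list(items)
        while any(a > b for a, b in zip(items, items[1:])):
            i, j = random.randrange(len(items)), random.randrange(len(items))
            items[i], items[j] = items[j], items[i]
        return items

    # Keep the list short unless your CPU really is infinite.
    print(bozo_sort([5, 3, 8, 1, 2]))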

In addition, we can stop wasting our time improving the efficiency of previously slow algorithms. For example, the Barnes-Hut simulation algorithm could be thrown out in favour of brute-force N-body simulation. Problems that previously demanded clever analytical solutions could simply be ground out numerically through sheer brute force, and we could finally lift the curse of dimensionality.
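
A direct O(n^2) N-body step is about as simple as simulation code gets - the whole reason Barnes-Hut exists is that the naive version below collapses for large n. Units, the gravitational constant and the crude Euler integration are all simplifications.

    # Direct O(n^2) gravity with naive Euler integration - purely illustrative.
    def nbody_step(bodies, dt=0.01, G=1.0, eps=1e-3):
        """bodies: list of dicts with 'pos', 'vel' (3-element lists) and 'mass'."""
        for a in bodies:
            acc = [0.0, 0.0, 0.0]
            for b in bodies:
                if a is b:
                    continue
                d = [b["pos"][i] - a["pos"][i] for i in range(3)]
                r2 = d[0] ** 2 + d[1] ** 2 + d[2] ** 2 + eps   # softening avoids /0
                inv_r3 = r2 ** -1.5
                for i in range(3):
                    acc[i] += G * b["mass"] * d[i] * inv_r3
            a["acc"] = acc
        for a in bodies:
            for i in range(3):
                a["vel"][i] += a["acc"][i] * dt
                a["pos"][i] += a["vel"][i] * dt

    # Two made-up bodies: a heavy one at rest and a light one in a rough orbit.
    bodies = [{"pos": [0.0, 0.0, 0.0], "vel": [0.0, 0.0, 0.0], "mass": 100.0},
              {"pos": [1.0, 0.0, 0.0], "vel": [0.0, 10.0, 0.0], "mass": 1.0}]
    for _ in range(1000):
        nbody_step(bodies)
    print(bodies[1]["pos"])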

As a bonus, we can skip the fierce scientific debate about whether developing metaheuristics is a waste of time *. ;)

In terms of programming animation/video, we can forget pixels and frames per second completely, and instead think in terms of time and screen proportions.

* <Begin Controversial Statement> (Alternatively, one could compare random or Brute force search with the success of say... genetic algorithms, and solve the debate that way - free lunches are best eaten hot) <End of Controversial Statement>.



3: Physics and particles (for entertainment purposes)



From the World Of Goo game.
Special effects in films would become ever more spectacular. But let's concentrate on games for this section. Physics-based gameplay won't be restricted to pinball or sports. In fact, we can go beyond static polygons, and build up our world from trillions of individual atoms to allow for realistic simulation of effects such as water, explosions and air flow. Indeed, games such as World of Goo, Little Big Planet, Hydrophobia or Crysis with its 3000-barrel explosion reflect some of the changes taking place in this scene. However, you can bet that last one is not running in realtime (roughly a 250x speed-up would be needed for that, as of the time of writing!).

With this sort of game engine, expect hyper-realistic and interactive world effects such as liquids, bridges, explosions, weather and breakable surroundings, but also many strange and novel visual scenes and gameplay styles: the manipulation of semi-liquid, jelly-like objects, monopole magnets, unusual explosion effects, reverse black holes, matter conversion, and other such madness. Games could feature bizarre stories, such as a battle in which Blue Goo tries to prevent Grey Goo from eating everything in sight, and be as realistic as need be. We could redefine the laws of physics themselves at our whim.

Finally, using atoms and molecules as a basis for virtual reality and games allows the calculation of realistic sounds (instead of relying on prerecorded samples). Only recently has there been an attempt to model realistic sounds such as a dripping water tap, and the model isn't perfect due to the complexity of the problem. You can imagine how complicated the acoustics of an ocean would be in comparison...





[source: Matthias Süßen (CC-BY-SA 2.5)]
There's a nice scenic picture, with a graceful mist in the distance.

Except it's not fog at all, but rather airborne Grey Goo, consuming everything in its path! If AI ever reaches its most advanced stage, there's a tiny chance the Grey Goo will eat the world. But that's not going to happen really, is it?

2: Artificial Intelligence


We're in more speculative territory now, but according to Ray Kurzweil, computers should start to match the speed of the human brain by around 2030 (around 10,000 trillion calculations per second). At that point, we may be able to let humanoids do our housework, and at some point after that, even attain the singularity itself.

It's possible all this may happen of course, but the computer's inability to understand aesthetics, or even what makes a good piece of music, may prevent us from using this bombshell to automatically and easily create a future paradise, never mind cure the human condition of unhappiness generally. For the time being, we'll have to make do with simulating a rat's neocortical column.

Because of that speculative nature, AI just misses out on the top spot, which goes to...


1: Physics and particles (for scientific/engineering purposes)



[Source: Universesandbox.com] What happens when two galaxies collide?
Being able to simulate the universe is an ambitious task even for supercomputers. We need to simplify galaxies, and particularly the structure of matter to get even close.
And here it is, the big numero uno application for infinite speed. At the moment, we use lots of short cuts to model the universe around us. How would that change if speed was no obstacle? Well for starters, we can forget the fast but rough approximations of continuum mechanics (fluid/solid mechanics). Yes, even forego the lesser generalizations from statistical mechanics, and instead, go straight for a purely numerical/computational solution: Brute force molecular dynamics would let us simulate all particles interacting with each other. But hang on, since we've got CPU cycles coming out of our ears, why stop there? Solving problems involving the motion of fluids with quantum dynamics might seem a bit like overkill, but if we're in this for the long haul...
    [source]:
    "Quantum theory in principle allows us to predict the structure and reactivity of all molecules, but the equations of Quantum Theory become intractably complex with increasing system size. Exact analytical solutions are only possible for the smallest systems and for almost all molecules of interest in chemistry and life sciences no such solutions are known to us."
One useful application would be in the field of aerospace. This excerpt is taken from Frontiers of Supercomputing II, Chapter 9 - INDUSTRIAL SUPERCOMPUTING (Kenneth W. Neves):
    Why Use Supercomputing at All? (p335)
    [...]. One creates a geometric description of a wing, for example, and then analyzes the flow over the wing. We know that today supercomputers cannot handle this problem in its full complexity of geometry and physics. We use simplifications in the model and solve approximations as best we can. [...]. Smaller problems can be run on workstations, but "new insights" can only be achieved with increased computing power.
Until then, we'll need short cuts such as kludgy heuristics, or even analog machines to measure the Casimir force - a problem still too complicated for digital computers as yet.

One of the most powerful ideas would be to use genetic programming (or rather, a simpler brute-force search) to find solutions to general-purpose problems. Assuming a complete understanding of the 'theory of everything', the only remaining challenge - the one part computers can't really do for us - is to define the scoring mechanism (or fitness function, as it's known in the AI world).

Nanotechnology would get a boost too, as masses of computing power are needed to construct nanotech equivalents of normal-size mechanisms such as bolts, screws, valves, wheels, hinges and more complex machinery.

Taken from: Frontiers of Supercomputing II - Chapter 8 - THE FUTURE COMPUTING ENVIRONMENT - Molecular Nanotechnology (Ralph Merkle)
    In the same way, we can model all the components of an assembler using everything from computational-chemistry software to mechanical-engineering software to system-level simulators. This will take an immense amount of computer power, but it will shave many years off the development schedule.




Offsite Links:

A nice thread asking the very same question
The Infinity Machine - A more technical slant, discussing the mechanics of how a theoretically infinitely fast computer may work.
http://arxiv.org/pdf/math/0212047 - A research paper, expanding the concept of the Turing Machine to include infinitely fast processing.

Keywords for further research

  • currently intractable
  • quantum computation applications
  • http://en.wikipedia.org/wiki/Grand_Challenge
  • brute force
  • NP, NP-Hard, NP-Complete, EXPTIME, 2-EXPTIME
  • curse of dimensionality
  • combinatorial explosion
  • Solomonoff Induction