"You can raise the argument that intractability is relative. You can boldly thrust forward Moore's Law - like a child that's made a macaroni bird in art class - but if you do, you're not getting it. Intractable is bigger than Moore's Law. Intractable is like, thermodynamics big." - Johnath
Nice quote, but did he bargain for an infinitely fast CPU? |
17: Fractal exploration and the 3D Mandelbrot
Okay, maybe I'm biased here, but I had to tag this one in. Fractals can be awesome creatures, but realtime exploration isn't possible due to the calculations needed for deep zooming, decent antialiasing (up to 32x32 per-pixel oversampling needed for maximum quality!), and, for the more complicated fractals, raytracing in 3D. Furthermore, with CPU speed limits a thing of the past, we can hunt for the "Holy Grail" of fractals - the real 3D Mandelbrot. We covered this curious beast in an earlier article, and theorized that it would look like the most awesome fractal ever. If it existed.
Using infinite CPU power is an odd way to solve such an intriguing problem, but frankly I don't care how it gets solved as long as I can glimpse the 3D Mandelbrot for even one second.
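The escape-time iteration behind the ordinary 2D Mandelbrot set hints at why deep exploration is so expensive: every sample point needs its own iteration loop, and oversampling multiplies that cost. A minimal Python sketch (the grid size and iteration cap are arbitrary choices for illustration):

```python
# Minimal escape-time sketch of the classic 2D Mandelbrot set.
# With unlimited CPU speed, max_iter and the sampling density could be
# pushed arbitrarily high for deep zooms and heavy antialiasing.
def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    """Return how many iterations z -> z**2 + c takes to escape |z| > 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter  # treated as inside the set

# A coarse ASCII render of the set on a small grid.
for im in range(11):
    row = ""
    for re in range(31):
        c = complex(-2.0 + re * 0.1, -1.0 + im * 0.2)
        row += "#" if mandelbrot_iterations(c) == 100 else "."
    print(row)
```

The 3D Mandelbrot hunt would replace the complex square with some 3D analogue and add raytracing on top, multiplying the cost by orders of magnitude.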
16: More responsive computers
A very simple and obvious use, but one that would prove very welcome. The GUI of the OS would become much more responsive, with no apparent lagging or freezing. Yes, potentially even in Windows Vista.

15: Cutting stock and packing
A wealth of industries rely on packing items into a space as efficiently as possible. Likewise, cutting material to minimize waste is a tricky problem, at least in the two-dimensional version. Actually, it's probably the NP-complete 3D packing problem which would benefit most. As yet, no polynomial-time algorithm has been found which would help here, and the evidence hints there never will be one. Infinitely fast CPUs eat these kinds of problems for breakfast, however.
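To make the intractability concrete, here is a brute-force sketch of the simpler 1D bin packing problem (also NP-complete) in Python. It tries every possible assignment of items to bins - exactly the kind of exhaustive search an infinitely fast CPU would trivialize. The item sizes and capacity are made-up examples:

```python
from itertools import product

def min_bins(items, capacity):
    """Brute-force bin packing: try every way of assigning n items to
    up to n bins and keep the best feasible assignment. Exponential time,
    so only tiny instances are practical on real hardware."""
    n = len(items)
    best = n  # one bin per item always works (assuming each item fits alone)
    for assignment in product(range(n), repeat=n):
        loads = [0] * n
        for item, b in zip(items, assignment):
            loads[b] += item
        if all(load <= capacity for load in loads):
            best = min(best, sum(1 for load in loads if load > 0))
    return best

print(min_bins([4, 8, 1, 4, 2, 1], capacity=10))  # prints 2
```

Six items already mean 6^6 = 46,656 assignments to check; real cutting-stock instances blow up far beyond any feasible runtime, which is why industry relies on heuristics instead.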
14: Composing music
Unlimited amounts of speed would be a boon for composing, especially when today's VSTs (effectively software synthesizers) can gobble 20% or more of the CPU per channel. Programmers needn't worry about efficiency, and can concentrate on simplicity in their VSTs. Multiple effects such as echo, reverb, phase, or EQ can be set for each track/channel, again without having to worry about annoying hiccups in the playback. Kludgy workarounds such as 'freezing' will be a thing of the past, as will latency/timing issues. On the sound processing front, it's still difficult to generate perfectly simulated reverb or time/pitch stretching on the fly. Any number of other effects, particularly those involving countless 'granules' of sound, would become possible, unleashing new musical possibilities.
13: Alien hunting
SETI could use the speed to analyse galaxies for possible signs of life. Analysing the EM signals our telescopes receive is not an easy task. Amongst the noise, SETI has to pick out particularly dominant frequencies. That sounds reasonable until you consider that there are mountains of sky to cover, multiple Libraries of Congress worth of frequencies, and that any communication is likely to be pulsed if the alien life is intelligent. There's also the difficulty of Doppler shifting, where any possible frequency slides up or down slightly. Quote from Sciencemag.org:
12: Vehicle routing problem
The "Travelling Salesman" is a classic problem devised in 1930, and it has since been the subject of much study. Despite the intrinsic O(n!) complexity of the naive approach, research has reached the point where complicated heuristics can solve for millions of cities to within 2-3% of optimal (perfect accuracy still requires O(2^n) time at best). But real life always tends to throw a spanner in the works. There are often added complications such as travel costs and capacity, time-window restrictions, and different start/end locations for vehicles. All of these can be classed under the more general category of the Vehicle Routing Problem. Good progress has been made in this regard, using heuristics such as Tabu search (ref). But until exponentially faster CPUs pop into existence, we will continue to spend time tweaking parameters for special-case algorithms, and still obtain results that won't quite be optimal.
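The O(n!) brute force mentioned above fits in a few lines of Python. The distance matrix here is a toy example of four cities placed on a line:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Exact O(n!) travelling-salesman solver: try every tour order.
    dist is a matrix of pairwise distances; city 0 is fixed as the
    start/end so rotations of the same tour aren't counted twice."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Four cities on a line at positions 0, 1, 2, 3.
pos = [0, 1, 2, 3]
dist = [[abs(a - b) for b in pos] for a in pos]
length, tour = brute_force_tsp(dist)
print(length, tour)  # shortest round trip has length 6
```

At 10 cities this is already ~360,000 tours; at 20 it is beyond any real machine, which is why heuristics like Tabu search dominate in practice.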
11: Protein structure and folding prediction/simulation
The simulation of a single protein fold currently takes computer years to do what nature does in microseconds (around 30 to 100 trillion times faster, apparently). Folding@Home currently utilizes 100,000 processors working in parallel, and that's a massive improvement, but we're still nearly a billion times slower than realtime. The fast simulation of protein folding would help us to understand and find cures for Alzheimer's, AIDS, and cancer much more easily. At least in theory. It would seem that infinite CPU speed will not give us an instant 'magic cure' for anything; rather, the results will help steer us in the right direction.
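The figures above roughly check out: dividing the ~100-trillion-fold speed gap quoted for a single processor by Folding@Home's 100,000 processors leaves a gap of about a billion:

```python
# Back-of-the-envelope check of the speed gap quoted above (assumed figures).
nature_vs_one_cpu = 100e12    # nature ~100 trillion times faster than one CPU
parallel_cpus = 100_000       # Folding@Home's quoted processor count
remaining_gap = nature_vs_one_cpu / parallel_cpus
print(f"{remaining_gap:.0e}")  # ~1e9: still about a billion times slower
```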
10: Unification of custom chips
To some extent, the CPU and graphics card are already converging, as they both aim to dig into each other's traditional territory. Graphics cards can already be used for general purpose computing, and as CPUs increase their number of cores, no doubt they will start to encroach on the graphics card's functions (e.g. via raytracing). Intel's new Larrabee processor seems to set a precedent by combining many advantages of both the CPU and GPU, though only time will tell whether it will serve either purpose very well. It's a weird one, this, because we theorized about the idea 16 years ago in an old fanzine article on the 'future of computers'. Read it for a laugh and for some strangely accurate predictions which may yet come to pass.
9: Weather forecasting
Currently, we can predict the weather well around 5-8 days ahead. The theoretical limit is around two weeks. After that point, chaos theory takes over, and it's anyone's guess whether it will rain or shine. Of course, predicting the weather more accurately would help farmers know when to plant/harvest crops, construction companies when to build, and shipping/transportation companies what routes to take, and it would help us forecast very dangerous weather (saving property and lives). It would seem that computational power is the most limiting factor in our ability to do so (except perhaps for dangerous weather, where observational data over the oceans is limited). I quote from here:
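The two-week wall comes from chaos rather than from slow hardware: tiny measurement errors grow exponentially until they dominate the forecast. A toy demonstration using the logistic map (a standard textbook example of chaos, not an actual weather model):

```python
# Why more CPU power alone can't beat the ~two-week forecast limit:
# chaotic systems amplify tiny errors in the initial measurement
# exponentially. The logistic map at r=4 is the classic toy example.
def iterate(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a, b = 0.400000, 0.400001  # two "measurements" differing by one millionth
for steps in (5, 20, 40):
    print(steps, abs(iterate(a, steps) - iterate(b, steps)))
# After a few dozen steps the two trajectories disagree completely:
# the initial one-millionth error has swamped the prediction.
```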
8: Graphics (creating, rendering, modelling)
Even 2D programs such as Photoshop would enjoy the speed-up, as working on multiple layers with complex gradients and textures would be a breeze. Here are two more grand ideas:

Convergence of vector and bitmap editing
A perfect example is the smudge or blur tool. Two significant problems arise - namely the representation of the blurred area, and the CPU speed. The former problem can be at least partly solved by representing the blurred area as a new pseudo-object (as if the smudge/blur had been freshly drawn in by the user each time the picture is edited or refreshed). Another good example is detail level. Drawings comprised of vectors can only contain so much detail before the PC is choked to death by millions of points, at which point drawing becomes cumbersome and editing tedious. With infinite speed, however, both of the above problems can be overcome, and at last we can use a unified graphics editor that acts like a bitmap editor with effectively infinite resolution, plus the ability to re-edit previously drawn shapes like vector editors can.

True 3D Voxel Editing and beyond
Oh, that's not to say some haven't tried. The closest realization of the idea would probably be something called 'ZBrush'. It's a very curious program which can produce stunning results. However, it doesn't use true voxels, but instead uses pixels with a particular depth value. That means surfaces are usually one-sided, so you can't view them from behind (or place one 'voxel' behind another, for that matter - it's still a 2D array after all). Of course, a step above even true voxel painting would be to incorporate the versatility of re-editable 3D vectors with the flexibility of painting voxels (a 3D equivalent of the unification of vector and bitmap editing mentioned in the previous section). Along with a decent 3D mouse interface, creativity would be completely unbounded. I don't expect anything like it in my lifetime, that's for sure.
7: Music/sound analysis
In general, music information retrieval is very useful for automatically classifying, indexing, searching, and analysing music. Possibilities include translating MP3 to score/MIDI (which is still very tricky), and individual instrument extraction for use in a new composition. Signal analysis is also useful for speech and voice recognition, of course. But what else could a zippier CPU do for us here? Well, there are services out there that attempt to find music similar to your favourites based on various attributes. But that requires a lot of CPU time, as exhaustive pair-wise comparison of large music databases is required. Yes, techniques such as locality sensitive hashing can be used to reduce high-dimensional data to a more compact form, but these can be difficult to implement or maintain, and are generally a kludge which won't necessarily be as accurate as sheer brute force.

One of the fundamental techniques used before analysing a sound is to first split the signal into a frequency spectrum. This is usually done with STFT/FFT techniques, but with an infinitely fast CPU, one could try all possible sets of frequencies, amplitudes, and phase offsets of individual sine waves, mix them, and see which combination produces a result closest to the given signal window. Some signals/sounds may require only one or two sine waves to come close, whilst others may require hundreds or even thousands of mixed sine waves (each with their own amplitude, phase and frequency). It would be computationally prohibitive, but apart from being simpler, there's also the chance that brute forcing like this may at least partially overcome the 'uncertainty principle' whereby there's a compromise between frequency and time resolution.
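The brute-force alternative to the FFT described above can be sketched for the simplest possible case: a single sine wave whose frequency and phase are recovered by exhaustive grid search against the signal window. The grid resolutions here are arbitrary illustrative choices:

```python
import math

# Tiny version of the brute-force spectral idea: instead of an FFT,
# grid-search a single sine's frequency and phase and keep whichever
# candidate has the lowest squared error against the signal window.
N = 64
signal = [math.sin(2 * math.pi * 5 * n / N + 0.5) for n in range(N)]  # 5 cycles, phase 0.5

best = (float("inf"), None, None)
for freq in range(1, 16):        # candidate whole-number frequencies
    for p in range(63):          # candidate phases, 0.1 rad steps over ~2*pi
        phase = p * 0.1
        err = sum((signal[n] - math.sin(2 * math.pi * freq * n / N + phase)) ** 2
                  for n in range(N))
        best = min(best, (err, freq, phase))

err, freq, phase = best
print(freq, round(phase, 1))  # recovers frequency 5, phase 0.5
```

A real signal would need this search repeated over sums of many sines at once, which is where the cost explodes combinatorially - precisely the scenario that only an infinitely fast CPU could entertain.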
6: Models for a universal theory of nature
Finding something resembling a Theory Of Everything would probably allow many advances in engineering, just as the discovery of relativity led to better materials, fission and GPS, or how the discovery of quantum mechanics led to the laser and the microchip. We could then see the limits to space travel, and know for sure whether faster-than-light travel (through wormholes etc.) is attainable. The big question of exactly how the universe began (and even what came before that) could be answered once and for all.
5: Graphics (end user)
Photorealistic (or heavily detailed surreal/fantasy/psychedelic) imagery would become the norm. Games would of course look more glorious. The rendering equation could be solved perfectly, so developers could go overboard with global illumination, caustics, sub-pixel sampling, reflections, refractions and atmospheric effects, all with limitless levels of recursion. The creative process would never be hampered by how many polygons or B-Splines were allowed. All video could be made super-smooth too (500 frames per second - approaching the limit of perception). For example, Toy Story 2 took from 2 to 20 hours per frame, or five hours per frame on average. To render that in real-time for a video game (say 60 FPS), you would need a processor just over 1,000,000 times faster than what we have today. And that's mostly using Reyes rendering (which incorporates mostly rasterization techniques with only minimal ray tracing). Actually, maybe we won't have to wait long for some of this extravagance. Technology is slowly beginning to produce ray-traced graphics in realtime. Next stop, path tracing please.
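The "just over 1,000,000 times faster" figure follows directly from the numbers quoted:

```python
# Checking the speed-up figure quoted above: Toy Story 2 averaged about
# five hours of render time per frame; a 60 FPS game has 1/60 s per frame.
seconds_per_frame = 5 * 3600   # 5 hours, in seconds
target = 1 / 60                # 60 FPS frame budget, in seconds
speedup = seconds_per_frame / target
print(f"{speedup:,.0f}x")      # 1,080,000x - "just over a million"
```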
4: Rapid software development
In programming, there's often a balance to strike between readability/maintainability/modularity and the speed of code. That would all change with an infinitely fast processor. All code would be written with the former in mind, with little or no regard for efficiency. There would be no need, and so software would be much quicker and cheaper to develop. Lower level languages such as assembler, C or even Java can go for walkies. Instead, something more BASIC-, Ruby-, or Python-like would be the future. Or maybe something more declarative such as Prolog, where one defines the outcome rather than the steps needed to achieve it.

One example of simpler algorithm development would be sorting. Suddenly, Bubble sort would start to make sense. Actually, scrap that: Bozo sort, one of the archetypes of bad sorting, could now be the one to go for. In addition, we can stop wasting our time improving the efficiency of previously slow algorithms. For example, the Barnes-Hut simulation algorithm can be thrown out in favour of brute force N-body simulation. All techniques which provide analytical solutions can now be evaluated numerically through sheer brute force, and we can finally lift the curse of dimensionality. As a bonus, we can skip the fierce scientific debate about whether developing metaheuristics is a waste of time *. ;) In terms of programming animation/video, we can forget pixels and frames per second completely, and instead think in terms of time and screen proportions.

* <Begin Controversial Statement> (Alternatively, one could compare random or brute force search with the success of, say... genetic algorithms, and solve the debate that way - free lunches are best eaten hot) <End of Controversial Statement>
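For reference, Bozo sort really is as bad as advertised - a sketch:

```python
import random

# Bozo sort, the joke algorithm mentioned above: swap two random elements
# until the list happens to be sorted. Its expected runtime is astronomical,
# which is exactly why it only "makes sense" on an infinitely fast CPU.
def bozo_sort(items):
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        i, j = random.randrange(len(items)), random.randrange(len(items))
        items[i], items[j] = items[j], items[i]
    return items

print(bozo_sort([3, 1, 2]))  # eventually prints [1, 2, 3]
```

The readability-over-efficiency point stands, though: this is about the simplest sorting code one can write, and with infinite speed its absurd expected runtime would cease to matter.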
3: Physics and particles (for entertainment purposes)
With this sort of game engine, expect not only hyper-realistic and interactive world effects such as liquids, bridges, explosions, weather and breakable surroundings, but also many strange and novel visual scenes and gameplay styles: the manipulation of semi-liquid jelly-like objects, monopole magnets, unusual explosion effects, reverse black holes, matter conversion, and other such madness. Games could feature bizarre stories, such as a battle in which Blue Goo must prevent Grey Goo from eating everything in sight, and be realistic if need be. We could redefine the laws of physics themselves to our whim. Finally, using atoms and molecules as a basis for virtual reality and games allows the calculation of realistic sounds (instead of prerecorded samples). Only recently has there been an attempt to model realistic sounds such as a dripping water tap, and the model isn't perfect due to the complexity of the problem. You can imagine how complicated the acoustics of an ocean might be in comparison...
2: Artificial Intelligence
We're in more speculative territory now, but according to Ray Kurzweil, computers should start to match the speed of the human brain by around 2030 (around 10,000 trillion calculations per second). At that point, we may be able to let humanoids do our housework, and at some point after that, even attain the singularity itself. It's possible all this may happen, of course, but the computer's inability to understand aesthetics - or even what makes a good piece of music - may prevent us from using this bombshell to automatically and easily create a future paradise, never mind cure the human condition of unhappiness generally. For the time being, we'll have to make do with simulating a rat's neocortical column. Because of its speculative nature, AI just missed out on the top spot, which goes to...
1: Physics and particles (for scientific/engineering purposes)
"Quantum theory in principle allows us to predict the structure and reactivity of all molecules, but the equations of Quantum Theory become intractably complex with increasing system size. Exact analytical solutions are only possible for the smallest systems and for almost all molecules of interest in chemistry and life sciences no such solutions are known to us."
[...] One creates a geometric description of a wing, for example, and then analyzes the flow over the wing. We know that today supercomputers cannot handle this problem in its full complexity of geometry and physics. We use simplifications in the model and solve approximations as best we can. [...] Smaller problems can be run on workstations, but "new insights" can only be achieved with increased computing power.

Taken from: Frontiers of Supercomputing II - Chapter 8 - THE FUTURE COMPUTING ENVIRONMENT - Molecular Nanotechnology (Ralph Merkle)

One of the most powerful ideas would be to use genetic programming (or rather, a simpler brute force search) to find solutions to general-purpose problems. Assuming a complete understanding of the 'theory of everything', the only remaining challenge - the one thing computers can't really do for us - is to define the scoring mechanism (or fitness function, as it's known in the AI world). Nanotechnology would get a boost too, as masses of computing power are needed to design nanotech equivalents of normal-size mechanisms such as bolts, screws, valves, wheels, hinges and more complex machinery.
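The "define a fitness function, then brute force the design space" idea can be sketched as follows. The parameter ranges and the fitness function here are purely hypothetical stand-ins for whatever physical model the theory of everything would supply:

```python
from itertools import product

# Sketch of the idea above: with unlimited compute, "design" reduces to
# defining a fitness function and exhaustively scoring every candidate.
def brute_force_design(parameter_ranges, fitness):
    """Return the parameter combination with the highest fitness score."""
    return max(product(*parameter_ranges), key=fitness)

# Toy example: pick (length, width) maximizing area minus material cost.
ranges = [range(1, 11), range(1, 11)]
best = brute_force_design(ranges, lambda p: p[0] * p[1] - 2 * (p[0] + p[1]))
print(best)  # (10, 10)
```

In reality the search space for even a simple nanotech hinge is so vast that genetic programming and other heuristics exist precisely to avoid this exhaustive scan.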