See Drexler's mechanical nanotechnology from 1989.[1]
There's a minimum size at which such mechanisms will work, and it's bigger than transistors. This won't scale down to single atoms, according to chemists.
[1] http://www.nanoindustries.com/nanojbl/NanoConProc/nanocon2.h...
It seems like you've misremembered the situation somewhat.
Merkle developed several of his families of mechanical logic, including this one, in order to answer some criticisms of Drexler's earliest mechanical nanotechnology proposals. Specifically:
1. Chemists were concerned that rod logic knobs touching each other would form chemical bonds and remain stuck together, rather than disengaging for the next clock cycle. (Macroscopic metal parts usually don't work this way, though "cold welding" is a thing, especially in space.) So this proposal, like earlier ones such as Merkle's buckling-spring logic, avoids any contact between unconnected parts of the mechanism, whether sliding or coming into and out of contact.
2. Someone calculated the power density of one of Drexler's early proposals and found that it exceeded the power density of high explosives during detonation, which obviously poses significant challenges for mechanism durability. You could just run them many orders of magnitude slower, but Merkle tackled the issue instead by designing reversible logic families which can dissipate arbitrarily little power per logic operation, only dissipating energy to erase stored bits.
So, there's nothing preventing this kind of mechanism from scaling down to single atoms, and we already have working mechanisms like the atomic force microscope which demonstrate that even intermittent single-atom contact can work mechanically in just the way you'd expect it to from your macroscopic intuition. Moreover, the de Broglie wavelength of a baryon is enormously shorter than the de Broglie wavelength of an electron, so in fact mechanical logic (which works by moving around baryons) can scale down further than electronic logic, which is already running into Heisenberg problems with current semiconductor fabrication technology.
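To put rough numbers on that claim (a back-of-the-envelope sketch; the formula is the standard thermal de Broglie wavelength λ = h/√(2πmkT)):

    # Thermal de Broglie wavelengths at 300 K: electron vs. carbon nucleus.
    import math
    h, k, T = 6.626e-34, 1.381e-23, 300.0
    m_e = 9.109e-31            # electron mass, kg
    m_C = 12 * 1.661e-27       # carbon atom mass, kg
    lam = lambda m: h / math.sqrt(2 * math.pi * m * k * T)
    print(f"electron: {lam(m_e) * 1e9:.1f} nm")   # ~4.3 nm
    print(f"carbon:   {lam(m_C) * 1e12:.0f} pm")  # ~29 pm, ~150x shorter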
Also, by the way, thanks to the work for which Boyer and Walker got part of the 01997 Nobel Prize in Chemistry, we probably know how ATP synthase works now, and it seems to work in a fairly similar way: https://www.youtube.com/watch?v=kXpzp4RDGJI
> Chemists were concerned that rod logic knobs touching each other would form chemical bonds and remain stuck together, rather than disengaging for the next clock cycle.
The rod & shaft designs are passivated. This kind of reaction wouldn't happen unless you drove the system to way higher energies than were ever considered.
> Someone calculated the power density of one of Drexler's early proposals and found that it exceeded the power density of high explosives during detonation
I think this is a (persistent) misunderstanding. His original work involved rods moving at mere millimeters per second. There are a number of reasons for this, of which heat dissipation is one. However, all the molecular mechanics simulations that have been done operate at close to the speed of sound in the material, simply because they would otherwise be incalculable. There is sadly a maximum step size for MD simulations that is orders of magnitude smaller than what you would need to run at realistic speeds.
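To make the gap concrete (my own illustrative arithmetic, with a ~1 fs timestep assumed):

    # Steps needed to simulate 1 nm of rod travel at various speeds.
    dt = 1e-15         # typical MD timestep, ~1 femtosecond
    travel = 1e-9      # 1 nm of rod travel
    for label, v in [("design speed, ~1 mm/s", 1e-3),
                     ("simulated speed, ~speed of sound", 1e4)]:
        print(f"{label}: {travel / (v * dt):.0e} steps")
    # ~1e9 steps vs ~1e2 steps: seven orders of magnitude apart.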
> You could just run them many orders of magnitude slower, but Merkle tackled the issue instead by designing reversible logic families which can dissipate arbitrarily little power per logic operation, only dissipating energy to erase stored bits.
The rod logic stuff is supposed to be reversible too. Turns out it isn't though. But it could be close enough if operated at very low speeds or very low temperatures.
The rod logic stuff is WAY smaller than the rotational logic gates in TFA. For some applications that matters, a lot.
If you are going to go for the scale of these rotational systems, you might as well go electronic.
I appreciate the corrections, but what scale do you think the link-and-pivot systems are intended to operate at?
(Also, passivation doesn't eliminate van der Waals bonds.)
>mechanical logic (which works by moving around baryons) can scale down further than electronic logic, which is already running into Heisenberg problems with current semiconductor fabrication technology.
I think I must be missing something here, I thought this was working with atoms. Are you saying that someday mechanical logic could be made to work inside the nucleus? Seems like you might be limited to ~200 nucleons per atom, and then you'd have to transmit whatever data you computed outside the nucleus to the nucleus in the next atom over? Or are we talking about converting neutron stars into computing devices? Do you have a good source for further reading?
No, no, not at all! That kind of thing is very speculative, and I don't think anybody knows very much about it. What I'm saying is that the position of a nucleus is very, very much more precisely measurable than the position of an electron, so it has a much weaker tendency to tunnel to places you don't want it to be, causing computation errors. That allows you to store more bits in a given volume, and possibly do more computation in a given volume, if the entropy production mechanisms can be tamed.
We routinely force electrons to tunnel through about ten nanometers of silicon dioxide to write to Flash memory (Fowler–Nordheim tunneling) using only on the order of 10–20 volts. That's about 60 atoms' worth of glass, and the position of each of those atoms is nailed down to only a tiny fraction of its bond length. So you can see that the positional uncertainty of the electrons is three or four orders of magnitude larger than the positional uncertainty of the atomic nuclei.
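As a rough cross-check of those orders of magnitude (my own estimate; the phonon frequency is an assumed round number): the zero-point position spread of a bound atom is √(ħ/(2mω)), while an electron's thermal de Broglie wavelength is in the nanometers.

    # Positional uncertainty: silicon atom in a stiff bond vs. an electron.
    import math
    hbar, k, T = 1.055e-34, 1.381e-23, 300.0
    m_Si = 28 * 1.661e-27      # silicon atom mass, kg
    omega = 1e14               # assumed phonon angular frequency, rad/s
    dx_atom = math.sqrt(hbar / (2 * m_Si * omega))  # zero-point spread
    m_e = 9.109e-31
    lam_e = 6.626e-34 / math.sqrt(2 * math.pi * m_e * k * T)
    print(f"atom:     {dx_atom * 1e12:.1f} pm")  # ~3 pm, a few % of a bond
    print(f"electron: {lam_e * 1e9:.1f} nm")     # ~4 nm, ~1000x larger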
The interesting question is how much energy is lost to mechanical friction for a single logic operation, and how this compares to static leakage losses in electronic circuits. It should also be noted that mechanical logic may turn out to be quite useful for specialized purposes as part of ordinary electronic devices, such as using nano-relay switches for power gating or as a kind of non-volatile memory.
That's one of many interesting questions, and avoiding it is precisely why Merkle designed his reversible logic families with no sliding contact, so that no mechanical friction is involved. There are still potentially other kinds of losses, though.
And why wouldn't it work? Linear slide-like mechanisms consisting of a silver surface and a single molecule have been demonstrated[0]. The molecule only moved along rows of the silver surface. It was demonstrated to stay in one of these grooves for up to 150 nm. A huge distance at this scale.
It can work (see my sibling comment) but it's tricky. The experiment you link was done under ultra-high vacuum and at low temperatures (below 7 K), using a quite exotic molecule which is, as I understand it, covered in halogens to combat the "sticky fingers" problem.
You seem to be knowledgeable about this topic. The reversible component designs in the article appear to presuppose a clock signal without much else said about it. I get that someone might be able to prototype an individual gate, but is the implementation of a practical clock distribution network at molecular scales reasonable to take for granted?
I'm only acquainted with the basics of the topic, not really knowledgeable. It's an interesting question. I don't think the scale poses any problem—the smaller the scale is, the easier it is to distribute the clock—but there might be some interesting problems related to distributing the clock losslessly.
Not an expert, but would this count as molecular scale :)?
https://en.wikipedia.org/wiki/Chemical_clock
(This version can be done at home with halides imho: https://en.wikipedia.org/wiki/Iodine_clock_reaction)
To your question: I suppose all you need is for the halide moieties (Br) in your gates to also couple to the halide ions (Br clock?). The experiment you link was conducted at 7K for the benefit of being able to observe it with STM?
That's a different kind of clock, and its clock mechanism is a gradual and somewhat random decrease in the concentration of one reagent until it crosses a threshold which changes the equilibrium constant of iodine. It isn't really related to the kind of clock you use for digital logic design, which is a periodic oscillation whose purpose is generally to make your design insensitive to glitches. Usually you care about glitches because they could cause incorrect state transitions, but in this case the primary concern is that they would cause irreversible power dissipation.
The experiment was conducted at 7K so the molecule would stick to the metal instead of shaking around randomly like a punk in a mosh pit and then flying off into space.
Yeah you're probably right about the clocks but I hope that wouldn't stop people from trying :)
>The experiment was conducted at 7K so the molecule
Br is good at sticking to Ag so I suspect the 7K is mainly (besides issues connected to their AFM^W STM setup) because the Euro dudes love ORNL's cryo engineering :)
Br's orbitals are filled here because it's covalently bonded to a carbon, so it's basically krypton. Experiments with moving atoms around on surfaces with STMs are always done at cryogenic temperatures because that's the only way to do them.
> Hence, the Br atoms kept the molecules on track, likely because their interaction with the surface substantially contributed to the barrier for molecular rotation
Yeah that's a reason people prefer AFM (but then they won't be able to do manipulation)?
[Br- is a "good leaving group", not so much at 7K maybe. You are also right in that, above all, they don't want their molecule sticking (irreversibly) to the (tungsten) tip ]
Not entirely... terminal Br atoms were also required to keep the molecule on the silver tracks.
Those are some of the halogens I'm talking about. It's a little more polarizable than the covalently-bonded fluorine, so you get more of a van der Waals attraction, but still only a very weak one.
I'd love to see a watch manufacturer try to build a watch-sized purely mechanical computer
That's clearly feasible; the mechanical complexity of a mechanical computer is on the order of a Curta calculator, and I outlined some promising approaches to macroscopic mechanical digital logic 15 years ago in https://dercuano.github.io/notes/mechanical-computers.html. Since then, MEMS has advanced significantly and gone mainstream, and photolithographic, reactive-ion-etching-based silicon fabrication has been applied to other purposes, including watchmaking. Macroscopic silicon flexure components went first into TAG Heuer's Guy Sémon's Zenith Oscillator in the Zenith Defy Lab https://www.hodinkee.com/articles/zenith-defy-lab-oscillator... and then into mainstream watches:
https://wornandwound.com/no-escapement-an-overview-of-obtain...
https://monochrome-watches.com/in-depth-the-future-of-silico...
https://www.chrono24.com/magazine/innovative-escapements-fro... (warning, GDPR mugging)
https://www.azom.com/article.aspx?ArticleID=21921
https://www.europastar.com/the-watch-files/those-who-innovat...
This is very interesting, because according to one of the authors of the mechanical computing paper (personal communication), they never dynamically simulated the mechanisms. It was purely kinematic. So this web browser simulation is new work. Reversibility might disappear once dynamics are modelled.
Indeed. The web simulation clearly applies damping, which is an irreversible element. A truly reversible process should probably be built around minimally-damped oscillating elements, so that the stored energy never needs to dissipate.
Even if damping is removed they might not be reversible. Logic gates that were individually reversible were found to have difficulties operating when connected in a circuit: https://core.ac.uk/download/pdf/603242297.pdf
It's impossible to avoid incurring some losses at finite speed, but as far as I know there is nothing fundamental preventing one from approaching reversible operation when operating at a sufficiently slow (but nonzero) speed.
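A toy model of why (assuming linear viscous drag, the usual idealization): moving a distance d at constant speed v against a drag force bv dissipates E = bvd, which vanishes as v goes to zero even though the operation still completes.

    # Dissipation per operation for a linearly damped motion, vs. speed.
    b = 1e-12    # drag coefficient, kg/s (made-up illustrative value)
    d = 1e-9     # travel distance per logic operation, m
    for v in (1.0, 1e-3, 1e-6):          # speeds in m/s
        print(f"v = {v:5.0e} m/s -> {b * v * d:.0e} J per operation")
    # Every 1000x slowdown cuts the loss 1000x; there's no floor from friction.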
For things like machine learning I wonder how much extra performance could be squeezed out by simply working with continuous floating values on the analog level instead of encoding them as bits through a big indirect network of nands.
This is something that has been tried, basically constructing an analog matrix multiply/dot product, and it gives reasonable power efficiency at int8-ish levels of precision. More precision, and the required analog accuracy leads to dramatic power efficiency losses (each bit is about 2x the power), so int8 is probably the sweet spot. The main issues are that it is pretty inflexible and costly to design vs a digital int8 MAC array, hard to port to newer nodes, etc.
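A quick way to see why extra bits get expensive (a sketch with assumed noise levels, not measured hardware): in a length-n analog dot product, the signal RMS grows like √n, but so does independent per-multiply noise, so effective precision is set entirely by the per-multiply signal-to-noise ratio.

    # Effective bits of a noisy analog dot product (illustrative numbers).
    import math
    n = 256
    signal = math.sqrt(n / 9.0)          # RMS of ideal output, inputs in [-1, 1]
    for noise in (1e-2, 1e-3, 1e-4):     # assumed per-multiply noise std
        err = noise * math.sqrt(n)       # accumulated output noise
        print(f"noise={noise:.0e}: ~{math.log2(signal / err):.1f} effective bits")
    # ~5, ~8, ~12 bits: each extra bit needs the relative noise halved,
    # which is where the steep power cost per bit comes from.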
I have wondered this and occasionally seen some related news.
Transistors can do more than on and off, there is also the linear region of operation where the gate voltage allows a proportional current to flow.
So you would be constructing an analog computer. Perhaps in operation it would resemble a meat computer (brain) a little more, as the activation potential of a neuron is some analog signal from another neuron. (I think? Because a weak activation might trigger half the outputs of a neuron, and a strong activation might trigger all outputs)
I don’t think we know how to construct such a computer, or how it would perform set computations. Like the weights in the neural net become something like capacitance at the gates of transistors. Computation is I suppose just inference, or thinking?
Maybe with the help of LLM tools we will be able to design such things. So far as I know there is nothing like an analog FPGA where you program the weights instead of whatever you do to an FPGA… making or breaking connections and telling LUTs their identity
You don't think we know how to construct an analog computer? We have decades of experience designing analog computers to run fire control systems for large guns.
We also have a pretty decent amount of experience with (pulse/spiking) artificial neural networks in analog hardware, e.g. [1]. Very energy-efficient, but so far hard to scale.
[1] https://www.kip.uni-heidelberg.de/Veroeffentlichungen/detail...
That’s a very cool abstract, thanks. I suppose it’s the plasticity that poses a pretty serious challenge.
Anyway, if this kind of computer was so great maybe we should just encourage people to employ the human reproduction system to make more.
There’s a certain irony of critics of current AI. Yes, these systems lack certain capabilities that humans possess, it’s true! Maybe we should make sure we keep it that way?
It's possible, but analog multiplication is hard and small analog circuits tend to be very noisy. I think there is a startup working on making an accelerator chip that is based on this principle, though.
There are optical accelerators on the market that - I believe - do that already, such as https://qant.com/photonic-computing/
TLC, QLC, and MLC cells in SSDs are exactly this, so it is used already. And it shows you the limits of current technology.
> TLC, QLC, MLC
For those unaware of these acronyms (me):
TLC = Triple-Level Cell
QLC = Quad-Level Cell
MLC = Multi-Level Cell
For those unaware, MLC used to mean two-level cell specifically.
Quad-level is the current practical maximum.
You lose a lot of stability. Each operation's result is slightly off, and the error accumulates and compounds. For deep learning in particular, many operations are carried out in sequence, and the error rates can become unacceptable.
Deep learning is actually very tolerant to imprecision, which is why it is typically given as an application for analog computing.
It is already common practice to deliberately inject noise into the network (dropout) at rates up to 50% in order to prevent overfitting.
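For reference, dropout itself is a couple of lines (standard inverted dropout, nothing analog-specific):

    # Inverted dropout: zero out units with probability p, rescale the rest
    # so the expected activation is unchanged.
    import numpy as np
    rng = np.random.default_rng(0)
    def dropout(a, p=0.5):
        mask = rng.random(a.shape) >= p    # keep each unit with prob 1 - p
        return a * mask / (1.0 - p)
    print(dropout(np.ones(8)))             # roughly half zeros, rest scaled to 2.0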
Isn't it just for inference? Also, differentiating thru an analog circuit looks... interesting. Keep inputs constant, wiggle one weight a bit, store how the output changed, go to the next weight, repeat. Is there something more efficient, I wonder.
Definitely, if your analog substrate is implementing matrix vector multiplications (one of the most common approaches in this area). Then your differentiation algorithm is the usual backpropagation, which has rank 1 weight updates. With some architectures this can be implemented in O(1) time simply by running the circuit in "reverse" configuration (inputs become outputs and vice-versa). With ideal analog devices, this would be many orders of magnitude more efficient than a GPU.
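Concretely, for a linear layer y = Wx (standard backprop identities, which is what the "reverse configuration" exploits): the weight gradient is a rank-1 outer product, and the input gradient is just Wᵀ applied to the output gradient.

    # Backprop through y = W @ x: rank-1 weight update, transposed data path.
    import numpy as np
    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))
    x = rng.normal(size=4)
    g_y = rng.normal(size=3)                 # dLoss/dy
    g_W = np.outer(g_y, x)                   # dLoss/dW, always rank 1
    g_x = W.T @ g_y                          # dLoss/dx: the circuit run "in reverse"
    print(g_W.shape, np.linalg.matrix_rank(g_W))   # (3, 4) 1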
Actually kinda impressive that a current CPU is "only" 9 orders of magnitude from the ridiculously low theoretical minimum energy needed per "floating point operation" (kinda fuzzy, but who's counting at 9 orders of magnitude?). The efficiency difference between the first computers and SOTA CPUs is itself probably in the ballpark of 9 orders of magnitude.
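Checking that figure (kT ln 2 against an assumed ~10 pJ per FLOP for a modern CPU; the CPU number is very rough):

    # Landauer limit vs. a conventional CPU, order-of-magnitude only.
    import math
    k, T = 1.381e-23, 300.0
    landauer = k * T * math.log(2)     # ~2.9e-21 J per bit erased
    cpu_flop = 1e-11                   # ~10 pJ/FLOP, assumed ballpark
    print(f"Landauer: {landauer:.1e} J, gap: {cpu_flop / landauer:.0e}x")  # ~3e9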
This is the concept behind the computers in The Diamond Age right ? Or am I mistaken ?
It's very similar. The rod logic in The Diamond Age (Eric Drexler was the one who originally came up with it) moves linearly -- not rotationally like this does. It's also reversible.
These simulations are great. With the balance example, if you move the clock back and forth at the resonance frequency, you can get it really twisted up so it doesn't work: https://postimg.cc/xXFgbxtn
A great pedagogical article on thermodynamic vs logical reversibility, for those interested: https://arxiv.org/abs/1311.1886 (Sagawa 2014).
Sagawa was mistaken in this article; he failed to appreciate the role of mutual information in computing, which is the proper basis for understanding Landauer's principle. I discussed this in https://www.mdpi.com/1099-4300/23/6/701.
I love this. I have a friend who has a 3D printer, and we started looking at the repo for this project to print the mechanical shift register and play with a physical toy representation of storing a bit of data: https://github.com/mattmoses/MechanicalComputingSystems
Edit: I love that other people are thinking about this around now
Now that I think of it, if using damped springs, the system would not be reversible. Energy is dissipated through the damping, and the system will increase in entropy and converge on a local energy minimum point.
Another way of looking at it: there are 4 states going in (0 or 1 on 2 pushers) but there are only 2 states of the 'memory' contraption, so you lose a bit on every iteration (like classical Boolean circuits)
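In numbers, that's just Landauer's bound applied to the state counting: merging 4 input states into 2 output states discards log2(4/2) = 1 bit per iteration.

    # Minimum dissipation for losing one bit per iteration, at 300 K.
    import math
    k, T = 1.381e-23, 300.0
    bits_lost = math.log2(4 / 2)                       # 1 bit
    print(f"{bits_lost * k * T * math.log(2):.1e} J")  # ~2.9e-21 J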
Classical reversible computing feels like it would be a good way to interface with a quantum computer (since it's also reversible in theory).
Quantum computation came directly out of reversible computing. Look for example at the Fredkin and Toffoli gates.
For the silicon kind of reversible computing, see vaire.co. They're in tapeout.
My most exciting prediction is that we'll some day have Optical Computing LLMs, where we grow crystals (instead of silicon wafers) such that they contain wave-guides in which a single ray of light makes up each "input" into a Neural Net, and as it shines thru, it does the "logic" of an MLP. The summation part is not hard because light naturally superposes to "add", but the multiplications and activation functions will be the hardest part.
But ideally, once manufactured, a given LLM "model" will be a single solid crystal, such that an array of beams shined into it will come out the other end of this complex crystal as an "inference" result. This would mean an LLM that consumes ZERO ENERGY and, being made of glass, will basically last forever too.
We already have Optical Chips but they don't quite do what I'm saying. What I'm saying is essentially an "Analog LLM" where all the vector adds, mults, and tanh functions are done by the light interactions. It sounds far-fetched, but I think it's doable. I think there should theoretically be a "lens shape" that does an activation function, for example. Even if we have to do the multiplications on conventional chips, in a hybrid "silicon-wave" system, such an "Analog Optical LLM" would still have huge performance and energy savings, and be millions of times faster than today's tech.
And being based on light, could utilize quantum effects so that the whole thing can become a Quantum Computer as well. We could use effects like polarization and photon spin perhaps to even have 100s of inferences happening simultaneously thru a given apparatus, as long as wavelengths are different enough to not interact.
> Specifically, the Landauer’s principle states that all non-physically-reversible computation operations consume at least 10^21 J of energy at room temperature (and less as the temperature drops).
Wow! What an absurd claim!
I checked the Wikipedia page and I think you actually meant 10^-21 J :)
Fix! Ty!
P.S. I once calculated the mass of the sun as 0.7 kg and got 9/10 points on the question.
FYI, total global energy production is a lot less than 10^21 J. It's south of 10^19 from what I can google...
Depends on which aspects of energy production you're concerned with and over what time period. Global marketed energy production is about 18 terawatts, which is about 10²¹ J every year and 9 months. The energy globally produced by sunlight hitting the Earth, mostly as low-grade heat, is on the order of 100 petawatts, which is 10²¹ J every hour and a half or so. Global agriculture is in between these numbers.
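The arithmetic behind those figures, for anyone who wants to check (the solar number uses the actual intercepted power, ~173 PW, consistent with "on the order of 100 petawatts"):

    # Time for each power source to produce 10^21 J.
    import math
    E = 1e21                                  # joules
    marketed = 18e12                          # ~18 TW marketed energy production
    solar = 1361 * math.pi * 6.371e6**2       # solar constant x Earth cross-section
    print(f"marketed: {E / marketed / 3.156e7:.2f} years")  # ~1.76 years
    print(f"sunlight: {E / solar / 3600:.1f} hours")        # ~1.6 hours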
This says:
> The Technology Behind Remote Mind Reading
> The technology can be broken into multiple parts: the ability to read brainwaves remotely; the ability to decode brainwaves; the ability to beam back signals to the brain to influence it; the ability to apply this from a long distance; a mechanism for automation (to be able to apply it to a large number of victims); and an infrastructure for population-scale deployment.
> Believe it or not, every single one of those exists, and I will provide well-founded explanations in tangible details.
This doesn't seem to be related to reversible mechanical computing; rather, it seems to be a classic schizophrenic delusion. The evidence presented is a rather implausible patent from 01974 and a poster in all capital letters on "psycho-electronic weapon effects".
Presumably at some point this kind of thing will become possible, but no evidence is presented here showing that it has become possible already, and from what I know of the relevant fields, I don't see a way it would be possible today. It certainly wasn't possible in 01974 when the cited patent was published. The inventor was engaging in wishful, or perhaps schizophrenic, thinking.
By contrast, as a delusion, this belief has been a well-known symptom of schizophrenia since at least 01919: https://en.wikipedia.org/wiki/Thought_broadcasting https://archive.org/details/dementiapraecoxp00kraeiala/page/...
> But it is quite specially peculiar to dementia praecox that the patients' own thoughts appear to them to be spoken aloud. In the most varied expressions we hear the complaint of the patients constantly repeated that their thoughts can be perceived. They are said loud out [sic], sometimes beforehand, sometimes afterwards; it is "double speech," the "voice trial," "track-oratory," the "apparatus for reading thoughts," the "memorandum." A patient heard her thoughts sounding out of noises. In consequence of this everything is made public. What the patients think is known in their own homes and is proclaimed to everyone, so that their thoughts are common property. "I have the feeling, as if some one beside me said out loud what I think," said a patient. "As soon as the thought is in my head, they know it too," explained another. "When I think anything I hear it immediately," said a third. People look into the brain of the patient, his "head is revealed." When he reads the newspapers, others hear it, so that he cannot think alone any longer. "We can read more quickly than you," the voices called out to a patient. "Everyone can read my thoughts, I can't do that," complained a patient. Another said, "A person can have his thoughts traced by another, so that people can learn everything." A patient himself had "to whistle" his secrets "through his nose."
So, if you find yourself suspecting that there's a vast secret conspiracy to remotely read your mind, try to remember that that's a common delusion that literally millions of people have had, and most of them had it decades ago before there was any way it was possible, and it probably still isn't possible. If it were possible, there are a lot of more prosocial things that people would use it for: lie detector tests in job interviews and lawsuits, for example, or matchmaking services.
Sometimes combating delusions with reason doesn't work. Delusions can be extremely compelling! But sometimes it does.
Here's another excerpt from p. 13. Again, remember that this book is from 106 years ago, so telephones were the height of communications technology, radio was called "wireless telegraphy", and there was no CIA or KGB to blame these experiences on; "a professor" was the closest equivalent.
> The patients frequently connect them [their strange experiences of hearing voices and feeling that others can read their thoughts] with malevolent people by whom they are "watched through the telephone," or connected up by wireless telegraphy or by Tesla currents. Their thoughts are conveyed by a machine, there is a "mechanical arrangement," "a sort of little conveyance," telepathy. A patient said, "I don't know the man who suggests that to me." Another supposed that it might perhaps be done for scientific purposes by a professor. A third explained, "I am perfectly sane and feel myself treated as a lunatic, while hallucinations are "brought to me by magnetism and electricity."
A big problem with the idea of physical reversible computing is the assumption that you get to start with a blank tape. Blank tapes are trivial to acquire if I can erase bits, but if I start with a tape in some state, creating working space in memory reversibly is equivalent (identical) to lossless compression, which is not generally achievable.
If you start with blank tape then it isn't really reversible computing, you're just doing erasure up front.
Yes, but erasing the tape once is much better than erasing the tape many times over.
I don't think your criticism is applicable to any reversible-computing schemes that I've seen proposed, including this one. They don't assume that you get to start with a blank memory (tapelike or otherwise); rather, they propose approaches to constructing a memory device in a known state, out of atoms.
What do you think you're saying here? Building a memory device in a known configuration is erasing bits.
Yes, building a memory device in a known configuration is erasing bits. Once you've built it, you can use it until it breaks. As long as you decompute the bits you've temporarily stored in it, restoring it to its original configuration, you don't inherently have to dissipate any energy to use it. You can reuse it an arbitrarily large number of times after building it once. If you want to compute some kind of final result that you store, rather than decomputing it, that does cost you energy in the long run, but that energy can be arbitrarily small compared to the computation that was required to reach it.
Consider the case, for example, of cracking an encryption key; each time you try an incorrect key, you reverse the whole computation. It's only when you hit on the right key that you store a 1 bit indicating success and a copy of the cracked key; then you reverse the last encryption attempt, leaving only the key. Maybe you've done 2¹²⁸ trial encryptions, each requiring 2¹³ bit operations, for a total of 2¹⁴¹ bit operations of reversible computation, but you only need to store 2⁷ bits to get the benefit, a savings of 2¹³⁴×.
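The exponent bookkeeping, spelled out (numbers straight from the example above):

    # Reversible key-cracking: operations performed vs. bits actually stored.
    trials, ops_per_trial, stored_bits = 2**128, 2**13, 2**7
    total_ops = trials * ops_per_trial
    print(total_ops == 2**141)                     # True
    print(total_ops // stored_bits == 2**134)      # True: the savings factor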
Most practical computations don't enjoy quite such a staggering reduction in thermodynamic entropy from reversible computation, but a few orders of magnitude is commonplace.
It sounds like you could benefit from reading an introduction to the field. Though I may be biased, I can recommend Michael Frank's introduction from 20 years ago: https://web1.eng.famu.fsu.edu/~mpf/ip1-Frank.pdf
Thanks for the shout-out, Kragen!
A more complete index of my work can be found at https://revcomp.info.
What would you recommend newcomers read as an introduction to the field today? Would it be one of your own papers, or has someone else written an overview you'd recommend?
I didn't realize you'd left Sandia! I hope everything is going well.