> The reverse-engineering solves one mystery about the flip flop ... Looking at the reverse-engineered schematic, though, explains that a sharp pulse on the J pin will act like the clock, sending a pulse through the capacitor, turning off the transistor, and causing a high output. I assume this behavior is not intentional, and J inputs are expected not to transition as sharply as when I touched it with a ground wire.
This paragraph caused chills to run up my spine. The fact that they used AC coupling on the clock line seemed clever because it saved a couple of transistors, but the fact that they depended on J and K always being "slow" was cringeworthy. If J and K were somehow not slow, the device would become "not a proper flip flop." And yet it went to the Moon, so I guess they knew something I don't.
One other thought: When Ken grounded the J line, he was attaching it to essentially zero impedance to ground. The clock line will never have zero impedance, but it's reasonable to expect it to have lower impedance than regular signal lines. The sensitivity on J and K might be more a function of signal impedance than rise/fall time per se. Of course the two things are not unrelated. Potato/potahto.
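The interplay between source impedance and edge rate can be sketched numerically. Below is a minimal first-order model of an AC-coupling network (all component values are hypothetical, chosen only for illustration, not taken from the actual module): the capacitor passes d(Vin)/dt, so the pulse reaching the transistor depends on the edge rate, while the source impedance adds into the time constant. A hard ground (near-zero impedance, very fast edge) on J produces a nearly full-amplitude pulse; a slow, higher-impedance signal barely registers.

```python
# Hypothetical sketch of an AC-coupled input: high-pass RC driven by a
# linear voltage edge. Peak output shows how much "clock-like" pulse gets
# through for a given rise time and source impedance.

def coupled_pulse_peak(rise_time_s, r_source_ohm, r_ohm=10e3, c_f=100e-12,
                       v_step=5.0, steps=100_000):
    """Peak output of a high-pass RC driven by a linear edge (Euler integration)."""
    tau = (r_ohm + r_source_ohm) * c_f      # source impedance adds to the RC
    t_end = rise_time_s + 10 * tau
    dt = t_end / steps
    v_out, peak, t = 0.0, 0.0, 0.0
    for _ in range(steps):
        # input slope only during the edge, zero afterwards
        dvin = (v_step / rise_time_s) * dt if t < rise_time_s else 0.0
        v_out += dvin - (v_out / tau) * dt  # dVout = dVin - (Vout/tau)*dt
        peak = max(peak, abs(v_out))
        t += dt
    return peak

fast = coupled_pulse_peak(rise_time_s=10e-9, r_source_ohm=1.0)   # grounding wire
slow = coupled_pulse_peak(rise_time_s=100e-6, r_source_ohm=1e3)  # ordinary signal
print(f"fast edge peak: {fast:.2f} V, slow edge peak: {slow:.4f} V")
```

With these made-up values the fast, low-impedance edge couples almost the full 5 V through (enough to fake a clock), while the slow edge couples only tens of millivolts.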
Note that this box did not go to the Moon, and I don't think it would have been in the signal path (if I understand correctly, it is a ground unit used to run tests on the radio transmission gear). As to whether similar loosely specified flip-flops were in flight hardware, I have no idea.
I'll also note that even the much-vaunted Apollo Guidance Computer was a case of worse-is-better. The Saturn V, now that had a proper flight computer (the LVDC), designed by IBM, with redundant everything: a computer that could not fail. The AGC, in contrast, not only had no internal redundancy, it was designed to fail (in case of error the system could very quickly reboot and resume running its program where it left off). The trade-off was that the AGC was half the weight and twice the speed of the LVDC, and they could carry two of them to get the reliability they wanted.
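The reboot-and-resume idea can be sketched in a few lines. This is an illustrative toy, not the AGC's actual restart mechanism (the real machine used restart tables in erasable memory); it just shows the design principle of tolerating failure by checkpointing progress rather than preventing failure with redundant hardware.

```python
# Illustrative sketch only: a task runner that survives a simulated crash by
# "rebooting" and resuming from the last completed checkpoint.

def run_with_restarts(tasks, max_crashes=2):
    """Run tasks in order; on a crash, resume at the checkpoint, not from scratch."""
    checkpoint = 0          # survives the "reboot" (think: erasable memory)
    crashes = 0
    results = []
    while checkpoint < len(tasks):
        try:
            name, fn = tasks[checkpoint]
            results.append((name, fn()))
            checkpoint += 1          # commit progress after each task
        except RuntimeError:
            crashes += 1             # reboot and retry from the checkpoint
            if crashes > max_crashes:
                raise
    return results

# Usage: the second task fails once ("crash"), then succeeds after the reboot.
state = {"tries": 0}
def flaky():
    state["tries"] += 1
    if state["tries"] == 1:
        raise RuntimeError("transient fault")
    return "ok"

out = run_with_restarts([("burn", lambda: "ok"), ("guide", flaky)])
print(out)  # [('burn', 'ok'), ('guide', 'ok')]
```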
Fresh out of university as an EE in my first job, I was surprised to find that the rule was "if it conforms to the tests then it works". Sometimes the tests were poorly designed, or narrow enough to cover only the happy paths. Box ticked, ship to customer.
I eventually moved to software at which point I discovered that it's even worse here. It's at least 10^6x more difficult to kill people though and you usually don't have to get off your chair to undo the carnage.
Yes, in software we don't even have to worry about trying to get the magic smoke back into the chip (very difficult).
> "if it conforms to the tests then it works".
Um, but what if someone does something outside of the tests?
"Well, then it breaks".
Yes, but this is software, how does it break?
"Hopefully not badly and insecurely"
Often it's "If it appears to pass the test, it ships".
Before flip-flops were implemented as monolithic integrated circuits, all flip-flops used AC-coupled clock inputs that required fast input clock edges, and their correct operation depended critically on various timing parameters.
The reason for this is that it saved much more than a couple of transistors.
Most flip-flops with relaxed timing requirements are of the so-called master-slave type, and the most common implementation variants need 3 to 4 times more transistors than an AC-coupled flip-flop, for which 2 transistors can be enough in the simplest variant.
Even the first integrated flip-flops used schematics similar to the earlier discrete flip-flops, until they were replaced by later models using more transistors, such as the TTL 54/7472, 54/7473, 54/7474, and 54/7476.
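The behavioral difference the extra transistors buy can be sketched in code. Below is an illustrative behavioral model (my own toy, not a transistor-level schematic of any of these parts): a master-slave JK flip-flop samples J/K only while the clock is high and transfers the result on the falling edge, so the output never depends on how sharply J or K transition. That is exactly the property the AC-coupled design gives up to save transistors.

```python
# Illustrative behavioral model of a master-slave JK flip-flop: the master
# is level-sensitive while the clock is high; the slave copies the master
# on the falling edge. Edge rates on J/K are irrelevant by construction.

class MasterSlaveJK:
    def __init__(self):
        self.master = 0
        self.q = 0
        self.prev_clk = 0

    def step(self, j, k, clk):
        if clk:                            # clock high: master follows JK rules
            if j and k:
                self.master = 1 - self.q   # J=K=1: toggle
            elif j:
                self.master = 1            # J=1: set
            elif k:
                self.master = 0            # K=1: reset
        elif self.prev_clk:                # falling edge: slave copies master
            self.q = self.master
        self.prev_clk = clk
        return self.q

ff = MasterSlaveJK()
# A sharp pulse on J while the clock is idle has no effect on Q:
for j, k, clk in [(0, 0, 0), (1, 0, 0), (0, 0, 0)]:
    ff.step(j, k, clk)
print(ff.q)  # still 0
# A real clock pulse with J=1 sets Q:
ff.step(1, 0, 1); ff.step(1, 0, 0)
print(ff.q)  # now 1
```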
Fair enough. I learned this subject during the TTL era after transistors got cheap. Capacitors were "ugly analog things" that were only used for Vcc buffering.
I never knew AC coupled clocks happened before DC coupled clocks. Thanks!
Good comments. I don't have details on impedance vs fall time so I don't know how big the safety margin was. But this behavior caused me so much confusion when I was testing the module to reverse engineer it.
By the way, this module wasn't used in flight; it was part of a test box that was used on the ground. The Updata Link box onboard the spacecraft was built with different technology. Just want to avoid confusion :-)
Author here for your X-ray questions...
No questions just appreciation. Your blog and Marc’s channel are about the most interesting things I’ve read and watched for years (as an ex EE). Pure quality content. Thank you.
Thanks for the nice comment!
What kind of file format does the CT machine produce? DICOM? What's the spatial resolution of the model? How big was the file for this component? Could you make the file available? Are they using WebGL to display the volume in 3D?
The system is web-based so I don't know about the underlying file format. They've downloaded files to a 3-D printer so it's something usable. The spatial resolution depends on the size of the object (which determines how close it is to the sensor). In one scan we saw the bond wires inside a transistor in a unit, so the resolution can be very good. For a large, dense metal object the resolution is lower.
This scan is online at: https://app.lumafield.com/project/afa60fd5-308d-41da-a0c6-14... You can manipulate the scan yourself after creating an account.
So apparently the volume is loaded in 3 chunks of 95MB each (285MB total). Looking at the code, they are indeed using WebGL (through Three.js) for the raytracing. I don't think it's a DICOM format, at least not on the frontend side, as I don't see the usual DICOM fields used in 3D rendering in the code. They have multiple versions of React bundled; I don't know what the story is there.
Anyway, thanks for the link, nice piece of technology.
I asked Lumafield. They say that they can export mesh files for 3D printing, but they don't support DICOM files.
My dad described using modules like this. Glad to see some confirming evidence.
Yes, hybrid modules like this were popular in the 1960s and produced by multiple manufacturers. Eventually, of course, integrated circuits replaced them for most applications.
Why do you refer to them as "hybrid" modules? It seems like a standard transistor logic circuit of the era to me.
That's what they called modules that were built from active and passive components. As opposed to a "monolithic" integrated circuit.
I still think you are using the term anachronistically. The first commercial hybrid integrated circuit was the Type 502 Binary Flip-Flop in March 1960 which was a very different beast. The first commercial monolithic integrated circuit, a NOR gate, didn't appear until 1961.
Obviously you are right in the sense that the word "hybrid" cannot be used to describe something as made by one of two techniques until both techniques exist.
So indeed the word "hybrid" could have been used only after the appearance of the first "monolithic" integrated circuits, to distinguish between the 2 types of integrated circuits.
I do not remember now if this is true or not, but the term "integrated circuit" could have been used before the first monolithic integrated circuit, to designate such modules assembled from various discrete components, which were then used as components themselves.
A few years later, the term "hybrid integrated circuit" began to mean specifically a device made with a substrate, usually ceramic, on which 1 or more dies of monolithic integrated circuits were attached, together with some discrete components, e.g. transistors, resistors, capacitors, inductors etc., and all were interconnected using thick-film or thin-film technology, before being packaged for environmental protection.
A device made in the same technology, but without any dies of monolithic integrated circuits, would have been named simply as a "thin-film integrated circuit" or "thick-film integrated circuit", as it was not a hybrid.
The flip-flop module described here is definitely an "integrated circuit", but it uses none of the more modern interconnection technologies used in the more recent integrated circuits, i.e. it is neither monolithic nor hybrid nor thin-film nor thick-film.
During the last decades, most integrated circuits have been monolithic, because that is the cheapest interconnection technology per component device, so now people presume when hearing just "integrated circuit" that it is a monolithic integrated circuit.
Such an old integrated circuit, for which none of the later specific terms is applicable, might now be called a "non-monolithic integrated circuit" or just a "flip-flop module".
How about "cordwood module" then?
Technically, a hybrid integrated circuit is just a bunch of individual components that are integrated together into a single package vs a monolithic integrated circuit which is your typical "everything on a chip of silicon".
> Cloud software then generates a 3-D representation from the X-rays.
Paying possibly hundreds of thousands of dollars for a machine and still not being able to operate it independently of the producer because a significant part of the software is “on the cloud” is not a good thing.
BTW: is anyone interested in DIYing such an X-ray CT? It sounds more and more doable as more flat-panel detectors are retired and show up on eBay, but reverse engineering them enough to drive them is not easy.