> Normally, this kind of project would require approximately three months for a skilled human engineer (approximately 430 hours of work).
Creative marketing speak. It's most likely true in a corporate environment with teams trying to coordinate their little fiefdoms, but not the case for a single engineer. Overestimated by ~one order of magnitude.
>With just one week of AI-powered processing, augmented by 38.5 hours of human expert assistance, the Project Speedrun computer was completed.
"Total time to layout ~38 hours." - _13 years ago_, nowadays most of the things one would struggle back then got automated. 40 hours for Zync to DDR3 interface, what is left are power supplies and low speed stuff. Overview of the project https://www.youtube.com/watch?v=jU2aHMbiAkU
It took Ben almost as long to clean up after the AI as it took Tesla500 to design his SoM from the ground up, back when DDR3 was still quite new and state of the art.
>Engineers preferred larger polygons for power distribution than Quilter originally produced. Enlarging these pours required opening space, shifting traces, and re-routing small regions to accommodate the changes.
No kidding. Their tool generated nice fat power traces up to the first tight spot, then gave up and bam, 2 mil tracks (VDDA_1V8, VDD_1V8) :D Almost unmanufacturable at JLCPCB/PCBWay (they have asterisks at 2 mil) and very bad for power distribution (brownouts).
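For a sense of scale, some back-of-the-envelope math on why a 2 mil power trace is asking for brownouts (this assumes 1 oz copper; the actual stackup isn't public, so treat the numbers as order-of-magnitude):

```python
# DC resistance of a rectangular PCB trace: R = rho * L / (w * t).
RHO_CU = 1.68e-8   # copper resistivity, ohm*m
T_1OZ = 35e-6      # 1 oz/ft^2 copper thickness in metres (assumed)
MIL = 25.4e-6      # 1 mil in metres

def trace_resistance_ohm(width_mil: float, length_in: float = 1.0) -> float:
    """DC resistance of a trace in ohms, 1 oz copper assumed."""
    return RHO_CU * (length_in * 1000 * MIL) / (width_mil * MIL * T_1OZ)

for width in (2, 15):
    r = trace_resistance_ohm(width)
    print(f"{width:>2} mil: {r * 1000:3.0f} mOhm/inch, "
          f"{r * 0.5 * 1000:3.0f} mV drop per inch at 500 mA")
```

That works out to ~240 mOhm/inch for the 2 mil trace vs ~32 mOhm/inch at 15 mil: over 100 mV of IR drop per inch at half an amp, on a 1.8 V rail.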
>The goal was to match human comfort levels for power-distribution robustness.
Nah, in this particular case the goal was making it manufacturable and able to function at all. A human replaced those hilarious 2 mil traces with proper 15 mil ones. And you can't just click on a track and brrrrt it from 2 to 15 mil, as they themselves admit:
>Enlarging polygons often required freeing routing channels, which triggered additional micro-moves and refinements
A human EE had to go in, rip out more than half (the actually time-consuming half) of the generated garbage and lay it out manually. Those "micro-moves" involved completely re-arranging the layer stack, moving whole swaths of signals to different layers, shuffling vias, etc.
>Once via delays were included, several tuned routes no longer met their targets. The team re-balanced these nets manually.
"re-balanced" being colloquialism for ripped all the actually difficult to route parts and re-did manually.
The AI didn't even try to length match the flash. It just autorouted it like you would an 8 MHz Arduino board.
ENET_TD2 - what the hell happened there? :D The signal is doing a loop/knot over itself while crossing 3 layers. Ben was probably too tired of the AI shenanigans at this point and didn't catch it, instead elongating ENET_TD1 to length match this lemon.
Comparing the SoM AI output against the human "expert assistance", there is very little left from the AI. Almost every important track was touched or re-done from scratch by human hand. Ben (or another EE they didn't mention) did an amazing job salvaging this design into something actually working.
This is my impression after a quick glance. I didn't try looking for problems very hard, didn't look into component placement (caps; that would require reading datasheets), and didn't run any FEM tools.
AFAICT the tool routed the PCB from an existing schematic. It did not "design" the computer.
NXP publishes full schematics and CAD files for this platform, originally designed in Cadence Allegro. Our goal was to keep the schematic identical and prove out only the layout portion with Quilter. That gave us a clear baseline: if the board didn't work, it would be due to our layout.
In my experience, if you are trying to make a quality product in a complex space, it takes as long to fix autorouted stuff as it does to do it yourself (with some exceptions). I have no doubt that the autorouted stuff will work… but it won't be as robust.
Aging, thermal cycling, signal emissions, signal corruption, reliability, testability, failure dynamics, and a hundred other manufacturing, maintenance, usability, and reliability profiles are subtly affected by placement and layout choices that one learns to intuit over the years.
I’m not saying that AI can’t capture that eventually, but I am saying that just following simple heuristics and ensuring DRC compliance only gets you 80 percent of the way there.
There is as much work in getting the next 15 percent as there was in the first 80, and it often requires a clean slate if the subtleties weren't properly anticipated in the first pass. The same stands for the next 4 percent. The last 1 percent is a unicorn. You're always left with avoidable compromises.
For simple stuff where there is plenty of room, you can get great results with automation. For complex and dense designs, automation is very useful, but it is a tool to be wielded with caution, in the context of a carefully considered strategy for EMC, thermal, and signal-integrity trade-offs. When there is strong cost pressure, it adds a confounding element at every step as well.
In short: yes, it will boot. No, it will not be as performant when longevity, speed, cost, and reliability are exhaustively characterized. Eventually it may be possible to use AI to produce an equivalent product, but until we have an exhaustive set of "golden boards" and their schematics to use as training data, it will continue to require significant human intervention.
Unfortunately, well-routed, complex boards are typically coveted and carefully guarded IP, and most of the stuff that is significantly complex yet freely and openly available in the wild is still in the first 80 percent, if even that. The majority of circuit boards in the wild are either sub-optimally engineered or under so much cost pressure that everything else is bent to fit that lens. Neither of those categories makes good training data, even if you could get the gerbers.
It is a reasonable place to start. So much so that autorouters have been around for practically as long as computers have, and they've been better at it than people for most of that time.
The only reason people usually route PCBs is that defining the constraints for an autorouter is generally more work than just manually routing a small PCB, but within semiconductors autorouting overtook manual routing decades ago.
It is surprising (or not?) that there is such a vast gulf in automated tooling between the semiconductor world and the PCB routing world.
I guess maybe there are fewer degrees of freedom and more 'regularity' in the semiconductor space? Sort of like a fish swimming in an amorphous ocean vs. having to navigate uneven terrain with legs and feet. The fish is, in some sense, operating in a much more 'elegant' space, and that is reflected in the (beautiful?) simplicity of fish vs. all the weird 'nonlinear' appendages sticking out of terrestrial animals - the guys who walk are facing a more complicated problem space.
I guess with PCBs you have 'weird' or annoying constraints like package dimensions, via size, hole size, trace thickness, limited layer count, etc.
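To make the earlier point about constraint definition concrete, here is a hypothetical sketch (all field names invented; every real tool has its own schema) of the minimum you would have to pin down before an autorouter could even start on a board like this:

```python
# Illustrative only: the kind of constraint set a dense board demands
# before autorouting. None of this comes from Quilter's actual inputs.
board_constraints = {
    "stackup": {"layers": 6, "copper_oz": [1, 0.5, 0.5, 0.5, 0.5, 1]},
    "fab_limits": {            # what the board house can actually build
        "min_trace_mil": 3.5,
        "min_space_mil": 3.5,
        "min_drill_mm": 0.2,
        "min_annular_ring_mil": 5,
    },
    "net_classes": {
        "power":   {"min_width_mil": 15},
        "ddr_dq":  {"impedance_ohm": 50, "match_group": "ddr_byte0",
                    "match_tolerance_mil": 25, "max_vias": 2},
        "ddr_dqs": {"impedance_ohm": 100, "diff_pair": True,
                    "intra_pair_skew_mil": 5},
    },
    "keepouts": ["antenna_region", "mounting_holes"],
}
```

Writing all of that down correctly requires much of the same analysis you would do to route the board by hand, which is why, for small boards, manual routing often wins.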
>We chose to base our System-on-Module (SOM) + baseboard designs on the NXP i.MX 8M Mini evaluation platform. Staff Electrical Engineer Ben Jordan prepared the design and constraints for the boards and submitted the jobs. Quilter ran parallel seeded runs with varied constraints, completing the layout in 27 hours and returning multiple ranked candidates.
>Quilter took care of the repetitive design work while the engineer stayed in control. Automation handled placement, routing, and physics checks, freeing him to focus on firmware prep, documentation, and constraint refinement. Common supply-chain hiccups—a few connectors out of stock and a Wi-Fi module dropped—were resolved instantly, with no delay to iteration. Cleanup was minimal: PDN pours, via clusters, and minor footprint swaps—no rip-ups, no re-spins.
Holy clickbait, Batman! The hard parts were done for them! All the fast signals like DDR are on the SoM, designed by real humans who understand EE. To make it all even more of a lie, their design is basically a copy of the reference baseboard for the SoM.
"Boots on first attempt" well, duh! the SoM is self-contained. It boots all by itself as is... so no wonder that it boots.
No EMC results either. Making things work is 10% of the work. Passing certs on unintended emissions and making it stable is the other 150%.
My reading of this is that they asked the system to redesign the PCB that was used in the i.MX 8M reference system-on-module. It looks like they take a parts list, a PCB shape, and a rough floorplan and pass that to their tool, which spits out a PCB design.
I could actually see myself using this tool, as someone who trained as an EE and still likes to tinker with electronics. It would be fun to just assemble a parts list and a rough layout and then receive a working electronic device a few weeks later with minimal work.
1. A schematic of a reference design with all components specified, and a library of components with correct footprints.
2. A block diagram with the major components, but nothing too specific. Free rein of Digikey.com.
3. "Computer, make me a linux board, and make it snappy!"
(I think 1 is closest)
Took me a few reads to realize this wasn’t some sort of Irish slang
https://www.quilter.ai/blog/preparing-an-ai-designed-compute...