3D printing could potentially transform the global manufacturing landscape. But for that to happen, the 3D print community must first solve a major data pipeline challenge: speeding the processing of complex designs into machine instructions for 3D printers.
New 3D printing methods, such as HP’s Multi Jet Fusion technology, let designers work with complex internal structures and meta-materials that are impossible to fabricate with traditional methods, notes Jun Zeng, a senior researcher in HP Labs’ fabrication technology group.
“But it takes a lot of information to describe not only the shape but also the interior composition of a complex part,” he explains. “Additionally, the printer needs to compute auxiliary data tailored to the printing physics to ensure the physical parts that are printed match the original design.”
New research conducted by Zeng and HP Labs colleagues points to a promising approach for managing these very complex files, work now manifested in a toolkit of experimental algorithms that is helping HP’s 3D Print business group ready the future generation of HP 3D printers.
Trillions of voxels
Complex objects can be represented as a collection of voxels, or volumetric pixels. Each voxel records the intended properties of the object at that specific point, such as variations in color, elasticity, strength, and even the conductivity of the printed material, all of which adds to the file’s size.
“Using voxels as data containers is not only intuitive but also very flexible,” notes Zeng. “But it also means that we have a lot of voxels that need to be dealt with.”
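To make the idea of a voxel as a data container concrete, here is a minimal sketch of what a per-voxel record might look like, using a NumPy structured array. The field names, types, and layout are assumptions for illustration only, not HP's actual file format.

```python
import numpy as np

# Illustrative per-voxel record: each voxel stores the material
# properties intended at that exact point in the object. Field names
# and types are assumptions for this sketch, not HP's format.
voxel_dtype = np.dtype([
    ("color",        np.uint8, (3,)),  # RGB
    ("elasticity",   np.float32),
    ("strength",     np.float32),
    ("conductivity", np.float32),
])

# A tiny 4 x 4 x 4 block of voxels; a real part would need billions.
block = np.zeros((4, 4, 4), dtype=voxel_dtype)
block[0, 0, 0]["color"] = (255, 128, 0)
block[0, 0, 0]["elasticity"] = 0.7

print(block.itemsize)  # bytes per voxel: 15 in this sketch
```

Even this modest 15-byte record implies roughly 15 GB for a billion-voxel part before any compression, which hints at why file size becomes a central concern.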
A colorful dragon with complex internal lattice structures that Zeng shares as an example, for instance, is just a few centimeters across when printed, yet is described by a file structure with an addressability of 1 billion voxels.
HP 3D printers already have fabrication chambers larger than a cubic foot that can fabricate hundreds of parts in a single build, at resolutions of up to 1,200 dots per inch, where each dot can be represented by a single voxel.
“Once designers start to exploit the full voxel addressability afforded by these types of printers,” Zeng suggests, “we will be working with files that need to address tens of billions, and even a trillion, voxels.”
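A back-of-envelope calculation bears out that trillion-voxel figure. Assuming (purely for illustration) a chamber one foot on each side at the full 1,200 dots-per-inch resolution mentioned above:

```python
# Back-of-envelope voxel count for a build chamber at full
# addressability. The 1,200 dpi figure is from the article; the
# 12-inch cube is an illustrative assumption for "about a cubic foot".
DPI = 1200
SIDE_INCHES = 12

voxels_per_axis = DPI * SIDE_INCHES   # 14,400 voxels along each axis
total_voxels = voxels_per_axis ** 3   # full 3D addressability

print(f"{voxels_per_axis:,} voxels per axis")
print(f"{total_voxels:,} voxels total")  # 2,985,984,000,000 -- ~3 trillion
```

At that addressability, even one byte of data per voxel yields a multi-terabyte description of a single build.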
Files of this size present two challenges in particular. First, they must be reduced in size to be moved, stored, and otherwise manipulated efficiently. At the same time, each voxel and its neighbors must remain quickly accessible, so that machine instructions can be generated fast enough to feed the printer without creating a bottleneck in the printing process.
Intended variations in an object’s properties, where it gradually gets softer or more flexible, for example, also affect the instructions that must be sent to the 3D print head for each specific voxel, further complicating the processing required to print the design as intended.
“The big research challenge here comes down to how you structure the voxel data to enable both efficient compression and fast processing, which is also influenced by the computing architecture that you choose to do the voxel processing,” says Zeng.
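One common way to balance those two demands is a chunked sparse grid: dense fixed-size blocks of voxels stored in a hash map, so that empty regions cost nothing while any voxel and its face-adjacent neighbors can still be reached in constant time. The sketch below illustrates the idea only; it is not HP's actual data structure, and the chunk size and scalar payload are arbitrary choices.

```python
import numpy as np

CHUNK = 16  # voxels per chunk edge; an illustrative choice

class SparseVoxelGrid:
    """Chunked sparse grid: only chunks that contain data are
    allocated (helping compression), while any voxel lookup
    stays O(1) (helping fast instruction generation)."""

    def __init__(self):
        self.chunks = {}  # (cx, cy, cz) -> dense CHUNK^3 array

    def _locate(self, x, y, z):
        key = (x // CHUNK, y // CHUNK, z // CHUNK)
        return key, (x % CHUNK, y % CHUNK, z % CHUNK)

    def set(self, x, y, z, value):
        key, off = self._locate(x, y, z)
        if key not in self.chunks:
            self.chunks[key] = np.zeros((CHUNK,) * 3, dtype=np.float32)
        self.chunks[key][off] = value

    def get(self, x, y, z):
        key, off = self._locate(x, y, z)
        chunk = self.chunks.get(key)
        return 0.0 if chunk is None else float(chunk[off])

    def neighbors(self, x, y, z):
        # Constant-time access to the six face-adjacent voxels.
        return [self.get(x + dx, y + dy, z + dz)
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1))]

grid = SparseVoxelGrid()
grid.set(100, 200, 300, 0.5)
print(grid.get(100, 200, 300))  # 0.5
print(len(grid.chunks))         # 1 -- only one chunk allocated
```

The tension Zeng describes shows up directly in the chunk size: larger chunks compress and stream better, smaller chunks waste less memory on sparse regions, and the right balance depends on the computing architecture doing the processing.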
New approaches, and a new toolkit
Zeng and colleagues at HP Labs believe one viable option lies in deploying new kinds of parallel processing using both conventional computer chips (CPUs) and GPUs, chips initially developed for graphics processing. While CPUs are typically optimized to execute individual tasks with minimal latency, GPUs are optimized for throughput, taking on many similar but separate tasks at once.
The HP Labs team has been working with academic and industry partners to explore using CPUs and GPUs as co-processors, including a collaboration with chip maker NVIDIA.
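The division of labor behind that co-processing idea can be sketched in miniature. Here NumPy's bulk array operations stand in, purely illustratively, for GPU-style data parallelism: the "CPU side" makes a small irregular scheduling decision, while the "GPU side" applies one identical transform to every voxel at once. The thresholding kernel is a made-up example of a per-voxel operation, not HP's actual instruction-generation step, and a real pipeline would dispatch such work to CUDA or a similar platform.

```python
import numpy as np

def threshold_kernel(density):
    """The same simple operation applied independently to every voxel:
    turn a continuous material density into a print/no-print bit.
    A stand-in for real per-voxel instruction generation."""
    return (density > 0.5).astype(np.uint8)

rng = np.random.default_rng(0)
chunk = rng.random((64, 64, 64), dtype=np.float32)  # one dense chunk

# "CPU side": a latency-sensitive, irregular decision (skip empty
# chunks). "GPU side": a throughput-oriented bulk transform over
# ~262,000 voxels in one call.
if chunk.any():
    instructions = threshold_kernel(chunk)

print(instructions.shape)  # (64, 64, 64)
```

Because every voxel is processed by the same rule independently of the others, the work maps naturally onto the many-tasks-at-once model that makes GPUs attractive co-processors here.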