If you’ve compared 3D printer spec sheets, you’ve likely noticed that “accuracy” means different things to different people. Technical standards and commercial language often get mixed together, creating claims that are hard to interpret.
According to ISO 5725, a proper specification includes both accuracy (closeness to the true value) and precision (consistency across repeated measurements). Without separating these, a claim like “±100 μm” reveals little about real-world performance.
What matters in practice is how a printer’s performance aligns with your part tolerances and inspection plan. If your design calls for ±0.2 mm on critical features, you need to know the system’s bias, variability (more commonly described as repeatability), and reproducibility.
Learning to decode accuracy specs gives you control. Instead of taking claims at face value, you can evaluate whether a machine is capable of consistently meeting your tolerances. This lets you make purchasing decisions based on evidence, rather than unsubstantiated claims.
Before you can compare printer specs with confidence, you need to know the vocabulary. The terms below have defined meanings in standards such as ISO 5725 and ISO 1101. Misusing them leads directly to misreading vendor claims.
Accuracy: How close the average measurement is to the true value. This reflects systematic error or bias: a machine with low bias has high accuracy.
Precision: How consistently results cluster. Precision is about scatter, not correctness. It includes both repeatability (variation under the same operator, machine, and conditions) and reproducibility (variation across different operators, machines, or days).
Tolerance: A property of your drawing or design, not of the machine. Tolerance is the allowed variation defined by standards such as ISO 1101 or ASME Y14.5. Parts have tolerances; machines do not.
Resolution: The smallest increment a printer can command in motion or output (typically the XY minimum beam, bead, or pixel size, together with the Z layer height). Resolution does not guarantee dimensional accuracy.
Linearity: The consistency of error across the build or measurement range. Without good linearity, a single “accuracy” number is meaningless. For example, test artifacts such as star patterns can be built at a range of locations across the print bed to check a machine’s linearity.
In short: equipment vendors may use these terms loosely, but if you want to evaluate their specs against your tolerances, you must use them rigorously. The rest of this guide builds on these definitions.
When a datasheet lists an accuracy value, it rarely tells the whole story. To make sense of these numbers, you need to understand how accuracy is actually determined in practice, and how rigorous vendors are in producing their claims.
3D printer accuracy assessments typically begin with standardized test parts defined in ISO/ASTM 52902. These artifacts include holes, bosses, thin walls, and overhangs that probe different failure modes. They act as a common yardstick to compare how printers handle geometry across the build volume.
Metrology doesn’t stop at printing an artifact. Following ISO 5725 methods, systems must be evaluated through repeated measurements under varied conditions to capture both accuracy (closeness to the nominal value) and precision (consistency across trials). This step separates unsubstantiated claims from statistically defensible results.
Measurements are then taken using traceable instruments, such as coordinate measuring machines (CMM), computed tomography (CT), or optical systems. Before reporting results, engineers calculate an uncertainty budget (a formal accounting of all error sources per NIST guidance) to quantify confidence in the data. NIST emphasizes that without this chain of traceability, accuracy claims cannot be meaningfully compared.
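To make the idea of an uncertainty budget concrete, here is a minimal sketch of the standard GUM-style calculation: independent standard uncertainties are combined in quadrature, then multiplied by a coverage factor (k = 2 for roughly 95% confidence). The component names and values are hypothetical, purely for illustration.

```python
import math

# Hypothetical uncertainty components for one measured dimension (mm),
# each expressed as a standard uncertainty (1 sigma).
components = {
    "instrument calibration": 0.004,
    "repeatability": 0.006,
    "thermal expansion": 0.003,
    "operator/fixturing": 0.005,
}

# GUM practice: combine independent components in quadrature,
# then apply a coverage factor k=2 for ~95% confidence.
u_combined = math.sqrt(sum(u**2 for u in components.values()))
U_expanded = 2 * u_combined

print(f"combined standard uncertainty: {u_combined:.4f} mm")
print(f"expanded uncertainty (k=2):    {U_expanded:.4f} mm")
```

Note how the combined uncertainty is dominated by the largest components; shrinking a small term barely moves the total, which is why budgets guide where to invest in better metrology.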
Some manufacturers go further by validating the reliability of their measurement process itself. At Stratasys, for example, we deployed Measurement System Analysis (MSA), a Six Sigma methodology, to quantify repeatability, reproducibility, and stability across multiple sites. This ensures that published accuracy specs are not only precise but also consistent across operators and conditions.
This framework addresses three critical dimensions: repeatability, reproducibility, and stability.
We also invest in people. Engineers and application specialists across the US, UK, and Europe have completed dedicated MSA training, building the expertise needed to apply these methods consistently across product lines and regions.
Knowing how accuracy is measured is what allows you to cut through unsubstantiated “±100 μm” claims. A single number means little unless you know the test part, method, measurement system, and uncertainty behind it. When equipment vendors use standardized artifacts, rigorous metrology, and system verification, their accuracy specs become trustworthy benchmarks rather than aspirational promises.
When manufacturers present accuracy specs, the format is almost as important as the numbers. Specs often appear in three main forms:
Phrases like “±100 μm accuracy” or “25 μm resolution” typically represent a best-case snapshot under specific, often undisclosed conditions. They rarely include context such as environment, sample size, or post-processing. Unless you know what artifact was measured, under what conditions, and how many samples, a single figure is little more than a headline.
Graphs showing error versus size, build height, or location convey much more than one number. A slope indicates linearity, band thickness shows precision, and the offset from zero highlights bias. The presence (or absence) of confidence bands and sample counts tells you how trustworthy the curve really is.
For example, a graph might show a nearly flat slope with only minimal change in error as feature size increases. A +40 μm offset combined with a ±60 μm band would then indicate a small positive bias and a moderate, well-bounded level of precision.
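Reading slope and offset off a plot by eye works, but the same quantities can be computed directly with an ordinary least-squares fit. This sketch uses hypothetical (feature size, error) pairs as might be digitized from a vendor's error-vs-size chart; the slope approximates linearity and the intercept approximates bias.

```python
# Hypothetical (feature size, measured error) pairs in mm, as might be
# read off a vendor's error-vs-size plot.
data = [(5, 0.041), (10, 0.038), (20, 0.043), (40, 0.039), (80, 0.044)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Ordinary least squares: slope ~ linearity, intercept ~ bias.
sxx = sum((x - mean_x) ** 2 for x, _ in data)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in data)
slope = sxy / sxx                   # mm of error per mm of feature size
offset = mean_y - slope * mean_x    # systematic offset (bias)

print(f"slope (linearity): {slope:.6f} mm/mm")
print(f"offset (bias):     {offset:.4f} mm")
```

With this made-up data the slope is close to zero and the offset is near +0.04 mm, matching the flat-slope, +40 μm scenario described above.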
When an equipment vendor shares raw inspection data, you gain the ability to calculate bias, standard deviation, outlier rate, and error correlations yourself. This is the gold standard because it lets you directly simulate whether the printer can hold your drawing tolerances across the build volume.
With those formats in mind, let’s evaluate each specification in turn, starting with resolution.
Resolution specs often appear prominently on a 3D printer datasheet, but they are easily misinterpreted. Vendors highlight them because they’re simple to state, yet resolution is not the same as accuracy.
The key distinction is between nominal resolution (the commanded increment) and effective resolution (the smallest repeatable feature after printing and post-processing). A small nominal number may look impressive on paper but doesn’t necessarily translate into dimensional reliability.
Takeaway: Resolution specs describe potential detail, not guaranteed dimensional accuracy. Always look for supporting data on accuracy, precision, and overall dimensional performance before assuming a fine resolution number means better parts.
Spec sheets often use superlatives like high precision, ultra-accurate, or 25 µm resolution. But without data tied to standards, these phrases have no engineering meaning. To evaluate a printer’s claims, translate the language into measurable quantities and compare them to your part tolerances.
Start from your drawing tolerances, not the vendor brochure. Identify the dimensions and variation you can accept (for example, ±0.2 mm). Then ask: what is the system’s bias, and what is its spread relative to that tolerance?
Example: Your drawing allows ±0.2 mm. A vendor claims “±100 µm accuracy.” On its own, this says nothing about variation. If the bias is 0.05 mm but the spread is ±0.15 mm, many parts will exceed your tolerance. By contrast, a dataset showing +0.05 mm bias with a ±0.05 mm spread demonstrates capability with margin.
Marketing terms only matter once they’re translated into accuracy and precision – and then tested against your own tolerances. This translation is what turns vendor claims from slogans into evidence for go/no-go decisions.
When reviewing a 3D printer spec sheet, what’s missing is often as important as what’s included. Use the following framework to separate trustworthy data from marketing gloss.
Complete documentation of the test method is essential. Vendors who follow good metrology practice keep full records of artifacts, build conditions, and measurement procedures. This level of transparency allows you to repeat their tests and verify results, and it also prevents misinterpretation that can arise from selective sampling or undocumented post-processing.
Specs that withstand these questions and avoid the above red flags are far more likely to reflect real capability. Anything less should be treated as marketing shorthand, not a reliable predictor of part quality.
To make the translation from vendor claims to engineering choices concrete, consider how three common spec formats play out when matched against a simple case: your drawing requires ±0.2 mm tolerance on key dimensions.
A vendor may advertise “±100 μm accuracy,” but without a standard deviation describing precision the number is impossible to interpret. Bias (or accuracy) is only one component; knowing the precision lets you estimate the probability of meeting the specification. With the same headline figure, one combination of bias and spread can put the defect probability very low, while another pushes it to around 30%.
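The arithmetic behind that comparison is straightforward if you model errors as normally distributed. The sketch below, with hypothetical bias and sigma values, computes the probability that a dimension falls outside a ±0.2 mm tolerance; it illustrates how the same nominal bias can yield wildly different defect rates depending on spread.

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def defect_probability(bias: float, sigma: float, tol: float) -> float:
    """P(|error| > tol) assuming errors ~ Normal(bias, sigma^2)."""
    p_high = 1.0 - normal_cdf((tol - bias) / sigma)
    p_low = normal_cdf((-tol - bias) / sigma)
    return p_high + p_low

# Two hypothetical printers, both advertised with the same headline
# accuracy, evaluated against a +/-0.2 mm drawing tolerance.
tol = 0.20  # mm
print(defect_probability(bias=0.05, sigma=0.05, tol=tol))  # tight spread: well under 1% defects
print(defect_probability(bias=0.05, sigma=0.18, tol=tol))  # wide spread: roughly 30% defects
```

The tight-spread machine is comfortably capable; the wide-spread one fails roughly three parts in ten, even though both could honestly quote the same single-number spec.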
Another vendor provides a plot of absolute error versus nominal size. Here you can examine slope (linearity), offset (bias), and band thickness (precision). Suppose the plot shows near-zero slope, a +50 μm offset, and a ±75 μm band. You can predict that for your 20 mm feature, the expected mean error is +0.05 mm with 95% of results within ±0.075 mm. This leaves comfortable margin within your ±0.2 mm requirement, making the data interpretable and useful.
Best practice is when the vendor supplies raw inspection results across the build volume. With this dataset, you compute both bias and σ by quadrant of the XY plane and by Z tier. For example, if bias ranges from –0.03 to +0.07 mm and σ remains under 0.05 mm across all sectors, capability simulations confirm that your five most critical features consistently fall inside tolerance with high confidence. Full datasets are rarely published on datasheets, but many vendors will provide them when asked, and a willingness to share this level of detail is itself a useful indicator of capability maturity. This dataset enables not just acceptance but also risk quantification and process monitoring.
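Computing bias and sigma by sector is a simple grouping exercise once you have raw data. This is a minimal sketch with a fabricated dataset and an assumed 300 mm bed split into XY quadrants; the same pattern extends to Z tiers or any other sectoring scheme.

```python
import statistics
from collections import defaultdict

# Hypothetical raw inspection records: (x, y, error_mm) for one artifact
# feature measured at several XY positions across an assumed 300 mm bed.
records = [
    (50, 60, 0.04), (40, 70, 0.06),     # left-front
    (210, 80, -0.01), (220, 90, 0.00),  # right-front
    (60, 220, 0.05), (70, 210, 0.07),   # left-back
    (230, 240, 0.02), (240, 230, 0.03), # right-back
]

BED = 300  # assumed bed size (mm); the same idea extends to Z tiers

def quadrant(x, y):
    return ("left" if x < BED / 2 else "right",
            "front" if y < BED / 2 else "back")

groups = defaultdict(list)
for x, y, err in records:
    groups[quadrant(x, y)].append(err)

# Per-sector bias (mean error) and sigma (sample standard deviation).
for key, errs in sorted(groups.items()):
    print(key, f"bias={statistics.mean(errs):+.3f} mm",
          f"sigma={statistics.stdev(errs):.3f} mm")
```

Sector-level results like these are what let you spot a bed corner or build height where bias drifts outside what your tolerance can absorb.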
Single values leave gaps, graphs provide partial context, and full datasets allow rigorous capability analysis. When converting specs into decisions, always anchor the evaluation in your drawing tolerances and insist on both accuracy and precision data. This approach ensures that vendor claims translate into defensible, evidence-based acceptance or rejection.
A printer’s performance is not defined by one successful demo build. What matters is whether accuracy holds up over weeks and months, across operators, sites, and materials. ISO 5725 calls this reproducibility: the long-term consistency of results under varying conditions.
The best way to track reproducibility is through a control plan: recurring artifact builds, measured on a fixed schedule and charted over time across machines, operators, and sites.
This approach reveals drift, highlights when recalibration or maintenance is needed, and provides real evidence of stability.
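One common way to implement such monitoring is a Shewhart-style control chart: establish a center line and ±3 sigma limits from a baseline period, then flag later builds that fall outside. The weekly error values below are hypothetical, purely to illustrate the mechanics.

```python
import statistics

# Hypothetical weekly artifact-build measurements (mm error) for one
# critical dimension, oldest first.
weekly_error = [0.02, 0.03, 0.01, 0.02, 0.04, 0.03, 0.05, 0.06, 0.07, 0.08]

# Establish control limits from an initial baseline period, then flag
# later builds outside mean +/- 3 sigma (the classic Shewhart rule).
baseline = weekly_error[:5]
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

for week, err in enumerate(weekly_error[5:], start=6):
    flag = "DRIFT?" if not (lcl <= err <= ucl) else "ok"
    print(f"week {week}: error={err:+.2f} mm [{flag}]")
```

In this made-up series the later weeks breach the upper control limit, which is exactly the kind of early signal that should trigger recalibration or maintenance before parts go out of tolerance.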
When evaluating vendors, ask how they monitor accuracy over time. Do they run recurring artifact builds? Do they track performance across different sites and operators? Vendors who can demonstrate a reproducibility plan offer stronger assurance than those relying on one-off numbers.
When comparing systems, evaluate the quality of the vendor’s operating manuals and maintenance guidance, because detailed documentation reduces operator-driven variability. Vendors who publish clear best-practice procedures for setup, calibration, and upkeep provide stronger assurance that reproducibility can be maintained over time.
Before trusting a spec sheet, run it through a quick filter: does the vendor state the test artifact, the sample size, the measurement method, and both accuracy and precision?
Accuracy numbers without method, sample size, and test conditions provide little insight. Meaningful specifications separate accuracy from precision and show how results were measured, analyzed, and validated. Vendors who serve manufacturing customers publish statistical data, not single headline values, because capability must be demonstrated rather than assumed. When reviewing a datasheet, apply a critical eye and look for the metrology details that connect the claim to real performance.
For questions about how these principles apply to Stratasys systems, you can schedule a call and talk to a Stratasys expert.