
How to Read 3D Printer Accuracy Specs Like a Pro


Effy Shafner


Technical Content Writer


Why Accuracy Specs Seem Confusing and Why They Actually Matter

If you’ve compared 3D printer spec sheets, you’ve likely noticed that “accuracy” means different things to different people. Technical standards and commercial language often get mixed together, creating claims that are hard to interpret. 

According to ISO 5725, a proper specification includes both accuracy (closeness to the true value) and precision (consistency across repeated measurements). Without separating these, a claim like “±100 μm” reveals little about real-world performance. 

What matters in practice is how a printer’s performance aligns with your part tolerances and inspection plan. If your design calls for ±0.2 mm on critical features, you need to know the system’s bias, variability (more commonly described as repeatability), and reproducibility. 

Learning to decode accuracy specs gives you control. Instead of taking claims at face value, you can evaluate whether a machine is capable of consistently meeting your tolerances. This lets you make purchasing decisions based on evidence, rather than unsubstantiated claims. 

Key Terminology (Accuracy, Precision, Tolerance, Resolution, Linearity)

Before you can compare printer specs with confidence, you need to know the vocabulary. The terms below have defined meanings in standards such as ISO 5725 and ISO 1101. Misusing them leads directly to misreading vendor claims.

[Figure: accuracy vs. precision illustration]

Accuracy

How close the average measurement is to the true value. This reflects systematic error or bias: a machine with low bias has high accuracy. 

Precision 

How consistently results cluster. Precision is about scatter, not correctness. It includes both repeatability and reproducibility:  

  • Repeatability: Agreement under the same conditions in the short term. 
  • Reproducibility: Agreement across different operators, machines, or environments in the long term. 
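The distinctions above can be made concrete in a few lines of code. This is a minimal sketch, not a standard procedure: the measurement values are invented for illustration, and the between-run spread is only a crude proxy for a full ISO 5725 reproducibility study.

```python
# Separating bias (accuracy) from repeatability and reproducibility.
# All measurement values below are hypothetical.
from statistics import mean, stdev

NOMINAL = 20.000  # mm, the designed dimension

# Repeated measurements of the same feature:
# run A = same machine/operator/day; run B = different operator/day.
run_a = [20.04, 20.05, 20.03, 20.06, 20.04]
run_b = [20.09, 20.08, 20.10, 20.07, 20.09]

bias = mean(run_a + run_b) - NOMINAL   # systematic error (accuracy)
repeatability = stdev(run_a)           # scatter under identical conditions
# crude proxy for reproducibility: spread of the per-condition means
between_runs = stdev([mean(run_a), mean(run_b)])

print(f"bias = {bias:+.3f} mm")
print(f"repeatability (1 sigma) = {repeatability:.3f} mm")
print(f"between-condition spread = {between_runs:.3f} mm")
```

Note that a machine can score well on repeatability (run A clusters tightly) while still carrying a bias of tens of microns, which is exactly why the two numbers must be reported separately.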


Tolerance

A property of your drawing or design, not of the machine. Tolerance is the allowed variation defined by standards such as ISO 1101 or ASME Y14.5. Parts have tolerances; machines do not.  

Resolution  

The smallest increment a printer can command in motion or output (typically the XY minimum beam, bead, or pixel size, together with the Z layer height). Resolution does not guarantee dimensional accuracy. 

Linearity  

The consistency of error across the build or measurement range. Without good linearity, a single “accuracy” number is meaningless. For example, test-star artifacts can be built at a range of locations across the print bed to check the machine’s linearity.

In short: equipment vendors may use these terms loosely, but if you want to evaluate their specs against your tolerances, you must use them rigorously. The rest of this guide builds on these definitions. 

[Figure: test-star artifacts printed at multiple locations across the build plate]

How Accuracy Is Really Measured in 3D Printing

When a datasheet lists an accuracy value, it rarely tells the whole story. To make sense of these numbers, you need to understand how accuracy is actually determined in practice, and how rigorous vendors are in producing their claims. 

Test Artifacts

3D printer accuracy assessments typically begin with standardized test parts defined in ISO/ASTM 52902. These artifacts include holes, bosses, thin walls, and overhangs that probe different failure modes. They act as a common yardstick to compare how printers handle geometry across the build volume.  

[Figure: X/Y/Z build-axis diagram]

Measurement Studies 

Metrology doesn’t stop at printing an artifact. Following ISO 5725 methods, systems must be evaluated through repeated measurements under varied conditions to capture both accuracy (closeness to the nominal value) and precision (consistency across trials). This step separates unsubstantiated claims from statistically defensible results. 

The Metrology Chain 

Measurements are then taken using traceable instruments, such as coordinate measuring machines (CMM), computed tomography (CT), or optical systems. Before reporting results, engineers calculate an uncertainty budget (a formal accounting of all error sources per NIST guidance) to quantify confidence in the data. NIST emphasizes that without this chain of traceability, accuracy claims cannot be meaningfully compared. 
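The uncertainty budget mentioned above has a simple core: standard uncertainties from each error source are combined in root-sum-of-squares, then multiplied by a coverage factor (k ≈ 2 for roughly 95% confidence, assuming normality), in the style of the NIST/GUM guidance. The sketch below uses invented source values; a real budget would be built from calibration certificates and measurement studies.

```python
# Minimal uncertainty-budget combination in the GUM/NIST style.
# Source names and values are hypothetical.
from math import sqrt

# standard uncertainties, in mm
sources = {
    "CMM calibration":       0.002,
    "probing repeatability": 0.004,
    "thermal expansion":     0.003,
    "part fixturing":        0.005,
}

u_combined = sqrt(sum(u**2 for u in sources.values()))
U_expanded = 2 * u_combined  # k = 2 -> ~95% coverage

print(f"combined standard uncertainty u_c = {u_combined:.4f} mm")
print(f"expanded uncertainty U (k=2)      = {U_expanded:.4f} mm")
```

An accuracy claim of, say, ±0.05 mm measured with an expanded uncertainty of ±0.015 mm is a very different statement from the same claim with ±0.05 mm of measurement uncertainty, which is why the budget belongs alongside the headline number.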

Measurement Systems Analysis (MSA) 

Some manufacturers go further by validating the reliability of their measurement process itself. At Stratasys, for example, we deployed MSA, a Six Sigma methodology, to quantify repeatability, reproducibility, and stability across multiple sites. This ensures that published accuracy specs are not only precise but also consistent across operators and conditions. 

This framework addresses three critical dimensions: 

  • Accuracy – closeness of results to the true or reference value 
  • Precision – repeatability (same operator, same part) and reproducibility (different operators, same part) 
  • Stability – consistency of results over time 

We also invest in people. Engineers and application specialists across the US, UK, and Europe have completed dedicated MSA training, building the expertise needed to apply these methods consistently across product lines and regions. 

Why This Matters 

Knowing how accuracy is measured is what allows you to cut through “±100 μm” unsubstantiated claims. A single number means little unless you know the test part, method, measurement system, and uncertainty behind it. When equipment vendors use standardized artifacts, rigorous metrology, and system verification, their accuracy specs become trustworthy benchmarks rather than aspirational promises. 

Making Sense of Datasheet Formats

When manufacturers present accuracy specs, the format is almost as important as the numbers. Specs often appear in three main forms: 

The Single-Number Claim

Phrases like “±100 μm accuracy” or “25 μm resolution” typically represent a best-case snapshot under specific, often undisclosed conditions. They rarely include context such as environment, sample size, or post-processing. Unless you know what artifact was measured, under what conditions, and how many samples, a single figure is little more than a headline.

The Plotted Graph

Graphs showing error versus size, build height, or location convey much more than one number. A slope indicates linearity, band thickness shows precision, and the offset from zero highlights bias. The presence (or absence) of confidence bands and sample counts tells you how trustworthy the curve really is. 

For example, a graph might show a nearly flat slope with only minimal change in error as feature size increases. A +40-micron offset combined with a ±60-micron band would then indicate a small positive bias and a moderate, well-bounded level of precision.

The Full Dataset  

When an equipment vendor shares raw inspection data, you gain the ability to calculate bias, standard deviation, outlier rate, and error correlations yourself. This is the gold standard because it lets you directly simulate whether the printer can hold your drawing tolerances across the build volume.  
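Given such raw data, the basic statistics take only a few lines. This is a hedged sketch: the error values are invented, and the 2.5-sigma outlier screen is one simple choice among many (robust, median-based screens behave better on small samples).

```python
# Summary statistics from a raw inspection dataset.
# 'errors' (measured - nominal, in mm) is invented stand-in data.
from statistics import mean, stdev

errors = [0.02, 0.05, 0.03, 0.06, 0.04, 0.05, 0.01, 0.04, 0.30, 0.03]

bias = mean(errors)
sigma = stdev(errors)
# simple 2.5-sigma outlier screen (thresholds vary; with n this small,
# a strict 3-sigma rule can never flag a single gross error)
outliers = [e for e in errors if abs(e - bias) > 2.5 * sigma]
outlier_rate = len(outliers) / len(errors)

print(f"bias = {bias:+.3f} mm, sigma = {sigma:.3f} mm")
print(f"outliers: {outliers} ({outlier_rate:.0%})")
```

Note how the single 0.30 mm reading inflates both bias and sigma, which is why outlier rate and per-feature breakdowns matter as much as the averages.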

With those formats in mind, let’s evaluate each specification in turn, starting with resolution. 

Resolution Claims: What They Really Tell You 

Resolution specs often appear prominently on a 3D printer datasheet, but they are easily misinterpreted. Vendors highlight them because they’re simple to state, yet resolution is not the same as accuracy. 

  • Z resolution (layer height) influences surface finish and visible layer lines, but smooth surfaces don’t guarantee correct dimensions. 
  • XY resolution reflects optics, pixel pitch, laser size, or nozzle diameter. The nominal step size rarely equals the smallest stable feature, since curing, melt pool behavior, bead width, and shrinkage affect the result. 

The key distinction is between nominal resolution (the commanded increment) and effective resolution (the smallest repeatable feature after printing and post-processing). A small nominal number may look impressive on paper but doesn’t necessarily translate into dimensional reliability. 

Checklist: How to Decode Resolution Specs 

  • Does the vendor separate XY resolution from Z resolution (layer height)? 
  • Do they provide dimensional accuracy data for features at least 10x the stated resolution? For example, if a printer advertises a 25 µm XY resolution, look for accuracy data on features around 250 µm or larger, since accuracy near the nominal resolution is not metrologically meaningful. 
  • Are resolution numbers tied to machine settings (optics, nozzle, pixel pitch) or to measured part performance? 
  • Is there evidence of effective resolution after post-processing, not just nominal step size? 
  • Are surface finish claims clearly distinguished from dimensional accuracy? 

Takeaway: Resolution specs describe potential detail, not guaranteed dimensional accuracy. Always look for supporting data on accuracy, precision, and overall dimensional performance before assuming a fine resolution number means better parts. 

[Figure: nominal vs. effective resolution]

From Marketing Language to Measurable Capability

Spec sheets often use superlatives like high precision, ultra-accurate, or 25 µm resolution. But without data tied to standards, these phrases have no engineering meaning. To evaluate a printer’s claims, translate the language into measurable quantities and compare them to your part tolerances.

Decoding Common Phrases

  • High precision: should be backed by repeatability and reproducibility data, including standard deviation and number of samples.
  • High accuracy: low bias from the true value, ideally including an uncertainty budget.
  • “25 µm resolution”: should specify XY minimum feature size and Z layer height, along with accuracy data for features at least 10x larger.

Your Minimal Statistics Toolkit

  • Bias (accuracy error): the difference between measured and nominal (desired) values. Shows systematic oversizing or undersizing.
  • Precision (repeatability/reproducibility): scatter of results across repeated builds. Narrow spread = consistent results.
  • Uncertainty: combined measure of bias and precision, usually reported with 95% confidence. This is the bridge between vendor specs and your tolerance assessment.

Applying It to Your Design

Start from your drawing tolerances, not the vendor brochure. Identify the dimensions and variation you can accept (for example, ±0.2 mm). Then ask:

  • Does reported bias fit within half the tolerance band?
  • Is the process spread (e.g., 3σ) small enough to keep parts inside tolerance?
  • Are results consistent across build volume and conditions?

Example: Your drawing allows ±0.2 mm. A vendor claims “±100 µm accuracy.” On its own, this says nothing about variation. If the bias is 0.05 mm but the spread is ±0.15 mm, many parts will exceed your tolerance. By contrast, a dataset showing +0.05 mm bias with a ±0.05 mm spread demonstrates capability with margin.
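The capability check in this example can be sketched directly, under a normal-distribution assumption. Reading the vendor’s “spread” as a 2-sigma band is itself an assumption made here for illustration; always confirm what a spread figure actually denotes.

```python
# Estimate the fraction of parts out of tolerance, assuming
# error ~ Normal(bias, sigma). Inputs mirror the worked example above.
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def out_of_tolerance(bias: float, sigma: float, tol: float) -> float:
    """P(|error| > tol) for a symmetric +/-tol tolerance band."""
    p_high = 1.0 - norm_cdf((tol - bias) / sigma)
    p_low = norm_cdf((-tol - bias) / sigma)
    return p_high + p_low

# Drawing tolerance +/-0.2 mm; bias +0.05 mm; +/-0.15 mm spread read as 2 sigma
print(f"{out_of_tolerance(0.05, 0.075, 0.2):.1%} out of tolerance")
# Tighter process: +0.05 mm bias, +/-0.05 mm (2 sigma) spread
print(f"{out_of_tolerance(0.05, 0.025, 0.2):.4%} out of tolerance")
```

Even a few percent out of tolerance translates to tens of thousands of defective parts per million, so the second process is the only one that demonstrates capability with margin.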

Takeaway

Marketing terms only matter once they’re translated into accuracy and precision – and then tested against your own tolerances. This translation is what turns vendor claims from slogans into evidence for go/no-go decisions.

Spotting Red Flags and Asking the Right Questions

When reviewing a 3D printer spec sheet, what’s missing is often as important as what’s included. Use the following framework to separate trustworthy data from marketing gloss.

Red Flags to Watch For

  • Single-number accuracy claims without artifact type, feature size range, or sample size. Accuracy is not a single knob—it requires context.
  • Resolution highlighted as proof of accuracy. Layer thickness or pixel pitch says little about dimensional fidelity unless backed by measurement data.
  • Graphs with missing context. Plots that lack labeled axes, confidence intervals, or outlier visibility may look rigorous while concealing variability.
  • Incomplete datasets. If build coordinates, environmental conditions, material lot details, or the post-processing workflow are absent, reproducibility and real-world transferability cannot be assessed.

Questions to Pin Vendors Down

  • Artifact and standard: Which geometry was used, and does it align with ISO/ASTM 52902 or equivalent?
  • Sample size and coverage: How many builds, how many parts, and which regions of the build volume were tested?
  • Measurement method and traceability: Was inspection performed with CMM, CT, or optical systems, and is calibration documented?
  • Process conditions: Which material lot, scan strategy, or slicer version was used, and what was the complete post-processing route (cleaning chemistry or method, cure time and temperature, support removal approach, secondary machining or finishing)? Were environment and post-processing controlled and documented in the same way as the build itself?
  • Dataset transparency: Can the vendor provide the full dataset—including raw CAD, inspection plan, and per-feature results—rather than summaries?
  • Test protocol completeness: Can they provide the full protocol required to reproduce their results, including artifact CAD, build setup, material and process parameters, environmental conditions, post processing steps, measurement workflow, equipment settings, and uncertainty method?

Complete documentation of the test method is essential. Vendors who follow good metrology practice keep full records of artifacts, build conditions, and measurement procedures. This level of transparency allows you to repeat their tests and verify results, and it also prevents misinterpretation that can arise from selective sampling or undocumented post-processing.

Specs that withstand these questions and avoid the above red flags are far more likely to reflect real capability. Anything less should be treated as marketing shorthand, not a reliable predictor of part quality.

From Spec Sheet to Decision: Full Example

To make the translation from vendor claims to engineering choices concrete, consider how three common spec formats play out when matched against a simple case: your drawing requires ±0.2 mm tolerance on key dimensions.

Case 1: The Single Number

A vendor may advertise “±100 μm accuracy,” but without a standard deviation describing precision the number is impossible to interpret. In the two examples below, bias (accuracy) is only one component; knowing the precision lets you estimate the probability of meeting the specification. In one case the probability of a defect is very low, while in the other it is around 30%:

[Figure: two distributions against the same tolerance band, roughly 70% vs. 99% in spec]

Case 2: The Graph

Another vendor provides a plot of absolute error versus nominal size. Here you can examine slope (linearity), offset (bias), and band thickness (precision). Suppose the plot shows near-zero slope, a +50 μm offset, and a ±75 μm band. You can predict that for your 20 mm feature, the expected mean error is +0.05 mm with 95% of results within ±0.075 mm. This leaves comfortable margin within your ±0.2 mm requirement, making the data interpretable and useful.

Case 3: The Full Dataset

Best practice is when the vendor supplies raw inspection results across the build volume. With this dataset, you compute both bias and σ by quadrant of the XY plane and by Z tier. For example, if bias ranges from –0.03 to +0.07 mm and σ remains under 0.05 mm across all sectors, capability simulations confirm that your five most critical features consistently fall inside tolerance with high confidence. Full datasets are rarely published on datasheets, but many vendors will provide them when asked, and a willingness to share this level of detail is itself a useful indicator of capability maturity. This dataset enables not just acceptance but also risk quantification and process monitoring.
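The per-sector analysis described here can be sketched with a grouped capability calculation. The quadrant labels, error values, and the use of a simple Cpk-style index are all illustrative assumptions, not the vendor’s method.

```python
# Per-quadrant bias, sigma, and a Cpk-style capability index against a
# symmetric +/-0.2 mm tolerance on error. All data are invented.
from statistics import mean, stdev

TOL = 0.2  # mm; tolerance limits on error are -TOL and +TOL

errors_by_quadrant = {
    "front-left":  [0.02, 0.04, 0.03, 0.05, 0.03],
    "front-right": [0.05, 0.07, 0.06, 0.04, 0.06],
    "back-left":   [-0.02, -0.03, -0.01, -0.03, -0.02],
    "back-right":  [0.01, 0.03, 0.02, 0.00, 0.02],
}

for quadrant, errs in errors_by_quadrant.items():
    mu, sigma = mean(errs), stdev(errs)
    # Cpk: distance from the mean to the nearest limit, in units of 3 sigma
    cpk = min(TOL - mu, mu + TOL) / (3 * sigma)
    print(f"{quadrant:12s} bias={mu:+.3f} sigma={sigma:.3f} Cpk={cpk:.1f}")
```

A uniformly high Cpk across quadrants (and Z tiers, which the same grouping would handle) is the quantitative form of the claim that critical features fall inside tolerance with high confidence.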

The Takeaway

Single values leave gaps, graphs provide partial context, and full datasets allow rigorous capability analysis. When converting specs into decisions, always anchor the evaluation in your drawing tolerances and insist on both accuracy and precision data. This approach ensures that vendor claims translate into defensible, evidence-based acceptance or rejection.

Monitoring Accuracy Over Time: Reproducibility

A printer’s performance is not defined by one successful demo build. What matters is whether accuracy holds up over weeks and months, across operators, sites, and materials. ISO 5725 calls this reproducibility: the long-term consistency of results under varying conditions.

The best way to track reproducibility is through a control plan:

  • Print standardized artifacts at set intervals.
  • Measure critical features with traceable instruments.
  • Chart results using statistical process control (SPC).

This approach reveals drift, highlights when recalibration or maintenance is needed, and provides real evidence of stability.
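The SPC step above can be sketched as a basic individuals chart: establish a center line and ±3-sigma control limits from a baseline period, then flag later artifact measurements that fall outside. The measurement values are invented, and a production chart would add run rules beyond this simple limit check.

```python
# Minimal SPC check on a critical artifact dimension (in mm).
# Baseline and follow-up values are hypothetical.
from statistics import mean, stdev

baseline = [20.04, 20.05, 20.03, 20.06, 20.04, 20.05, 20.04, 20.03]
center = mean(baseline)
s = stdev(baseline)
ucl = center + 3 * s  # upper control limit
lcl = center - 3 * s  # lower control limit

# later periodic checks; a drifting machine starts violating the limits
new_points = [20.05, 20.04, 20.06, 20.09, 20.11]
for i, x in enumerate(new_points, start=1):
    flag = "ok" if lcl <= x <= ucl else "OUT OF CONTROL"
    print(f"check {i}: {x:.2f} mm  {flag}")
```

The value of the chart is that it catches gradual drift, like the last two points here, long before parts start failing inspection.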

When evaluating vendors, ask how they monitor accuracy over time. Do they run recurring artifact builds? Do they track performance across different sites and operators? Vendors who can demonstrate a reproducibility plan offer stronger assurance than those relying on one-off numbers.

When comparing systems, evaluate the quality of the vendor’s operating manuals and maintenance guidance because detailed documentation reduces operator driven variability. Vendors who publish clear best practice procedures for setup, calibration, and upkeep provide stronger assurance that reproducibility can be maintained over time.

Summary: Checklist for Evaluating 3D Printer Accuracy Specs

Before trusting a spec sheet, run it through this quick filter:

Terminology:

  • Are accuracy, precision, repeatability, and reproducibility used in a manner consistent with the definitions in ISO 5725 and ISO/ASTM 52900?

Test Method: 

  • Was a standardized artifact (ISO/ASTM 52902 or equivalent) used?
  • Is the vendor transparent about the exact workflow used to run the test so that the method can be reproduced?
  • Is the actual artifact geometry disclosed so you can verify feature types and dimensions?
  • Does the vendor show where the artifact was printed within the build volume to confirm spatial coverage?

Data Transparency

  • Are accuracy (bias) and precision (spread) reported separately?
  • Is sample size (n) and confidence level provided?
  • Is data broken down across the build volume, not just in one spot?
  • Is a raw dataset or feature-level statistics available?

Resolution vs Accuracy

  • Are XY feature size and Z layer height stated clearly as resolution values, and not conflated with accuracy?

Practical Relevance

  • Can you link the reported measurements to specific build conditions, material settings, feature sizes, and tolerance requirements that are comparable to your own parts?
  • Is the vendor willing to explain their test methods, assumptions, and datasheet details so you can verify how the specifications were produced?

Conclusion 

Accuracy numbers without method, sample size, and test conditions provide little insight. Meaningful specifications separate accuracy from precision and show how results were measured, analyzed, and validated. Vendors who serve manufacturing customers publish statistical data, not single headline values, because capability must be demonstrated rather than assumed. When reviewing a datasheet, apply a critical eye and look for the metrology details that connect the claim to real performance.

For questions about how these principles apply to Stratasys systems, you can schedule a call and talk to a Stratasys expert.