A long time ago you would find electronic equipment with numerous adjustments to allow for components' initial tolerances and ageing effects. In those cases there would be an adjustment/calibration procedure that went step by step through things like "measure TP1 voltage and adjust RV2 for 5.000V +/- 0.002V" and so on. There the measurements would actually change as a result of calibration, as errors were reduced step by step through the procedure.
These days it is practically unheard of to actually adjust equipment, outside of rather specialised laboratory-grade kit. So the calibration procedure is one of checking the instrument against known high-accuracy reference standards of voltage, resistance, etc. and verifying that the meter readings are within its specified accuracy.
So calibration is (in most cases) not about higher accuracy, but about confidence. It is really to make sure that when you measure something the meter reading is not a lie: if you know your instrument is, say, a 1% tolerance one, then the true value lies within +/-2.3V of your measured 230V.
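As a minimal sketch of that arithmetic (assuming a plain percentage-of-reading spec; real meter specs usually add a "plus N digits" term, which the optional arguments below stand in for, and the function name is just illustrative):

def tolerance_band(reading, percent, digits=0, resolution=0.0):
    """Return (low, high) bounds for a meter spec of
    +/-(percent of reading + digits counts of the last digit)."""
    margin = reading * percent / 100.0 + digits * resolution
    return reading - margin, reading + margin

# A 230V reading on a meter specified at +/-1% of reading:
low, high = tolerance_band(230.0, 1.0)
print(f"true value lies between {low:.1f}V and {high:.1f}V")  # 227.7V and 232.3V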
It is very important to have some means of checking that your instruments are working correctly, because the safety of your work depends in many cases on correctly verifying an installation after completion or when a fault is reported. Equipment can fail for all sorts of reasons: randomly from part failures, or as a result of mechanical or electrical abuse.
For basic measurements you can do this by comparison with another instrument or by using a calibration standard (e.g. the 'calcard' for low-R and high-R ranges), but for something like RCD trip times or PFC/Zs measurements you need a bit more of a test set-up to do so.
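For the compare-against-a-standard case, the check boils down to asking whether each reading sits within the meter's stated accuracy of the known reference value. A rough sketch of that, with made-up reference values and an assumed +/-2%-of-reading spec (no particular calcard or meter implied):

# Illustrative check of meter readings against known reference resistances.
CHECK_POINTS = {        # reference ohms -> measured ohms
    0.25: 0.253,        # low-resistance (continuity) range
    0.5e6: 0.495e6,     # insulation-resistance range
}
ACCURACY_PERCENT = 2.0  # assumed +/-2% of reading

for reference, measured in CHECK_POINTS.items():
    error = abs(measured - reference)
    limit = measured * ACCURACY_PERCENT / 100.0
    print(f"ref {reference:g} ohm, read {measured:g} ohm: "
          f"{'PASS' if error <= limit else 'FAIL'}")

The point is not the code, of course, but that each range needs a reference of known value to check against; for trip-time or loop-impedance functions that reference has to be a dedicated test rig rather than a passive card.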
But in most cases, if you have a requirement to demonstrate that your measurements are reliable, it is far easier to pay for a calibration service that looks after the traceability of the test set-up and provides a recognised report you can file away until the next year, or until any dispute arises.