MES-1386 rounding error
I created the PO using the XML from Matt but still haven’t replicated the error.
I noticed it's 216.818 here; in the ticket, it's 219.77.
Suggestions from Taylor: I think the error occurs when the reported value is either over or under a certain % of the machine's maximum over-consumption %.
For example, you report a quantity of 500 and have a lot of 490, but the machine's over-consumption limit is 10% (I don't know the real values here, so these are just examples).
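Taylor's hypothesis, using the made-up numbers above, can be sketched as follows (the function name, the check itself, and the parameter names are assumptions for illustration, not the real MES code):

```javascript
// Hypothetical check: is the reported quantity within the machine's
// allowed over-consumption of the lot quantity? The numbers are the
// made-up examples from the notes, not real Hitachi values.
function withinOverConsumption(reportedQty, lotQty, maxOverConsumptionPct) {
  return reportedQty <= lotQty * (1 + maxOverConsumptionPct / 100);
}

console.log(withinOverConsumption(500, 490, 10)); // true  (500 <= 539)
console.log(withinOverConsumption(600, 490, 10)); // false (600 >  539)
```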
The original Hitachi database from Jon: https://cimteq365-my.sharepoint.com/:u:/g/personal/jonathan_clapham_cimteq_com/EbeSHbRP9CJLiwf0E_V3hyYBHpdm9qoZnrLhYWmiOgUaQg
The original po XML from Matt: https://cimteq.atlassian.net/browse/MES-1386
Database MES_Hitachi_round.7z: the Hitachi MES database from my testing, updated from Jon's. In the first cell, ‘cell Tower’, on machine 2001, it contains the PO ‘MB0504jiyu’ I created.
Database cb_hitachiRounding.7z: a CableBuilder database with all the necessary designs, which I created. (This is not Hitachi's own CableBuilder database; I don't have one.)
The XML used in my testing is here. I changed the design version and alternative to 0, based on the XML from Matt, so it's easier to create those designs.
Findings by Jon
Having looked deep into the code I have been able to reproduce an issue that at least looks to be the same based on the error message -
In this case, the issue is caused by having different ‘quantity per unit’ values on job operations that are joined to the same machine run bom.
When in this situation, the quantity per unit that is actually used is completely random.
If you open and close the report progress multiple times, you will actually see the qpu change!
Solution
Once the parallel consumptions branch is merged into trunk, there should be no reason to have multiple boms of the same type (item and operation) with different quantities per unit. Any that do should be given different position keys, resulting in them not being joined by a machine run bom.
Additional checks need to be added to ensure that boms of the same type DO NOT have different qpus when processing the create po. If they do, processing should fail and an appropriate error should be shown in the integration log.
We’ll also need to perform the same checks when merging machine runs.
I’ve created two new tickets relating to this -
https://cimteq.atlassian.net/browse/MES-1892
https://cimteq.atlassian.net/browse/MES-1893
Note: from the supplied create po xml, there do not appear to be boms of the same type with different qpus, so I'll continue to investigate.
The Provided PO is NOT the PO that created the machine run in the screenshots -
The po xml cannot be the correct one: Hitachi report drum lengths in meters, and the quantity per unit for the problem bom is 0.0818181.
Dividing the reported required quantity by this qpu gives a length of 2686.141575030463919.
219.775 / 0.0818181 = 2686.141575030463919
Rounding this up or down results in a consumption of 219.7634166 or 219.8452347.
2686 * 0.0818181 = 219.7634166
2687 * 0.0818181 = 219.8452347
Neither of which would round to the 219.77 or 219.78 in the screenshot.
There are, however, a number of boms in the Hitachi dev db that match the item (1019037422539) and have a qpu that would round to match the 219.775 mentioned in Juan's email.
The qpu for these is 0.0826532 and the length that creates the consumption of 219.775 for item 1019037422539 is 2659 meters.
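The arithmetic above can be double-checked with a short script. The numbers come from the notes; round2 is a hypothetical helper, not the MES code:

```javascript
// Round to 2 decimal places, as the backend does for this uom.
const round2 = (x) => Math.round(x * 100) / 100;

// Candidate 1: qpu from the supplied PO xml.
const qpu1 = 0.0818181;
const lengths = [Math.floor(219.775 / qpu1), Math.ceil(219.775 / qpu1)]; // 2686, 2687
console.log(lengths.map((l) => round2(l * qpu1))); // [219.76, 219.85] - neither matches

// Candidate 2: qpu found on boms for item 1019037422539 in the dev db.
const qpu2 = 0.0826532;
console.log(round2(2659 * qpu2)); // 219.77 - a 2659 m drum matches the screenshots
```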
Here’s a list of the machine runs that have these in -
53034
53211
53211
53352
I’ve re-opened jobs from these machine runs and reported progress on drums with lengths of 2659 but did not get the error.
After spending much of the day on this, I'm concluding that this issue has either been fixed in the last year or is caused by having multiple qpus as mentioned above. It cannot be replicated with the info we have to hand.
I did it!!! -
All you need is multiple lots allocated to the bom line. Then the 3-decimal-place consumption from the frontend will be used, i.e. 219.775. This, rounded to 2 decimal places, equals 219.78. The required consumption was actually 219.774, which rounds down to 219.77.
Issue Found!!!!!
What a nightmare to find! The issue turned out to be related to the way the uom is looked up in the mesStandardNumberFormatter function in common.js. All the uoms in Hitachi's uom database table are in upper case. The mesStandardNumberFormatter function converts the uom that is passed in to lower case. It then uses this lower-case uom as a key in the uom map that came from the database. The problem is that the keys in that map are all still upper case, so the uom is not found.
mesStandardNumberFormatter then looks at its default list of uoms. ‘lb’ is not in there, so it falls back to using 3 decimal places.
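The broken lookup can be sketched like this. This is a simplified stand-in, not the real mesStandardNumberFormatter; the map contents, fallback value, and function names are assumptions:

```javascript
const DEFAULT_DP = 3; // assumed fallback when the uom is unknown
const uomDecimalsFromDb = { LB: 2, M: 3 }; // upper-case keys, as in Hitachi's uom table

// Bug: the lookup key is lower-cased, but the db keys are upper case,
// so the lookup misses and we fall back to the 3 dp default.
function decimalPlacesBuggy(uom) {
  return uomDecimalsFromDb[uom.toLowerCase()] ?? DEFAULT_DP;
}

// One possible fix: normalise the key to match the db casing.
function decimalPlacesFixed(uom) {
  return uomDecimalsFromDb[uom.toUpperCase()] ?? DEFAULT_DP;
}

console.log(decimalPlacesBuggy('LB')); // 3 - 'lb' not found, falls back
console.log(decimalPlacesFixed('LB')); // 2 - matches the db entry
```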
The raw value for our consumption was 219.7748 which results in a value of 219.775 to 3 dp.
If there is more than one lot allocated to our bom line, the value 219.775 is sent from the frontend to the backend and used when deciding whether our consumptions are ok, in the method doWeHaveEnoughForConsumption. This method works out what the consumption should be from the qpu and length, which results in 219.7748. However, the backend correctly finds the uom and the related decimal places to round to; in this case it's 2 dp.
219.7748 to 2 dp is 219.77 so this is what is used for the required quantity.
It then goes on to round the input quantity from the frontend (219.775) to 2 dp, which results in a value of 219.78.
As the required quantity (219.77) and the reported quantity (219.78) do not match, and manual consumption is disabled, the error is thrown!
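The whole double-rounding failure can be reproduced with the numbers above. roundTo is a hypothetical helper; the real signatures of mesStandardNumberFormatter and doWeHaveEnoughForConsumption are not shown here:

```javascript
// The frontend rounds the raw consumption to 3 dp (because the uom
// lookup failed), the backend then rounds that value to the correct
// 2 dp and compares it with the raw value rounded to 2 dp.
const roundTo = (x, dp) => Math.round(x * 10 ** dp) / 10 ** dp;

const rawConsumption = 2659 * 0.0826532; // 219.7748588 (length * qpu)

const frontendValue = roundTo(rawConsumption, 3); // 219.775 - wrong dp from the lookup bug
const reportedQty = roundTo(frontendValue, 2);    // 219.78  - backend rounds the frontend value
const requiredQty = roundTo(rawConsumption, 2);   // 219.77  - backend rounds the raw value

// Mismatch: with manual consumption disabled, the error is thrown.
console.log(reportedQty !== requiredQty); // true
```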