Has anyone noticed that when you update a character field using REST with a value that happens to be a valid ISO date string, Epicor will magically convert it to a US-format date and drop the milliseconds?
It makes sense that precision is lost when updating a date field, because the value has to be stored as such in the SQL database. But why character fields? They can and should be stored as-is.
What’s worse is that, in this example, creating a second UD record with an ISO date that differs only in the milliseconds causes a duplicate record error, since the truncated values end up identical.
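For anyone who wants to reproduce it, something along these lines is all it takes. This is only a rough sketch: the base URL, the UD01 service path, the key segment and the Character01 field are placeholders for whatever table and field you test against.

```python
# Rough sketch of the REST update that triggers the conversion.
# Base URL, service name, key segment and field name are all placeholders;
# adjust to whichever UD table/field you are testing with.
import requests

BASE = "https://your-server/api/v1/Ice.BO.UD01Svc"   # assumed UD01 service path
AUTH = ("user", "password")

payload = {
    # A plain character field holding a valid ISO 8601 string:
    "Character01": "2023-05-04T10:15:30.123Z",
}

resp = requests.patch(
    f"{BASE}/UD01s('EPIC06','TEST1','','','','')",    # placeholder key segment
    json=payload,
    auth=AUTH,
    verify=False,   # common for on-prem test servers with self-signed certs
)
print(resp.status_code)

# Reading the record back, Character01 comes out as a reformatted date
# without the milliseconds instead of the exact string that was sent.
```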
Ok I’ve done some testing so we can report this properly.
I tried adding some text before the date string, and (separately) after it, and in both cases the conversion does not occur; see the comparison below.
Somewhere, either in the serializer or in Epicor’s REST code, the value must be recognized as a date when it is passed on its own, and transformed accordingly. It doesn’t seem to care that the target is a string field, or that the value is wrapped in quotes.
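Roughly, the variants look like this (the field name is just illustrative):

```python
# The bare ISO string is the only variant that gets rewritten;
# prefixing or suffixing any text leaves the value untouched.
converted    = {"Character01": "2023-05-04T10:15:30.123Z"}    # rewritten, ms dropped
left_alone_1 = {"Character01": "x2023-05-04T10:15:30.123Z"}   # stored as-is
left_alone_2 = {"Character01": "2023-05-04T10:15:30.123Zx"}   # stored as-is
```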
There is no DATE type in JSON. Dates have to be passed as strings, so they must always be wrapped in quotes. It is up to the backend to decide whether a given string represents a date or not. I suppose there is some automagic here going wrong.
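To illustrate the kind of automagic I mean: a deserializer that sniffs strings for date-like content behaves roughly like the sketch below (some .NET serializers do this kind of date detection by default). This is purely an illustration in Python, not Epicor’s actual code, and the output format here is made up for the example.

```python
# Illustration only: a JSON hook that treats any fully ISO-parseable string
# as a datetime and re-serializes it in its own format, losing the original
# text (and the milliseconds, if the target format has no fractional seconds).
import json
from datetime import datetime

def sniff_dates(obj):
    out = {}
    for key, value in obj.items():
        if isinstance(value, str):
            try:
                dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
                out[key] = dt.strftime("%m/%d/%Y %H:%M:%S")  # arbitrary example format
            except ValueError:
                out[key] = value  # not date-like, keep the string as-is
        else:
            out[key] = value
    return out

body = '{"Character01": "2023-05-04T10:15:30.123Z"}'
print(json.loads(body, object_hook=sniff_dates))
# {'Character01': '05/04/2023 10:15:30'}  -- no longer the string that was sent
```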
Another weird thing is that the format after conversion is technically neither a US date nor a SQL date.
A US date would use a 12-hour clock and AM/PM.
The native SQL datetime format is “YYYY-MM-DD hh:mm:ss”.
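For comparison, here is roughly what those two formats look like for the same timestamp (plain strftime illustrations):

```python
from datetime import datetime

dt = datetime(2023, 5, 4, 14, 15, 30, 123000)

print(dt.strftime("%m/%d/%Y %I:%M:%S %p"))  # US style: 05/04/2023 02:15:30 PM
print(dt.strftime("%Y-%m-%d %H:%M:%S"))     # SQL datetime style: 2023-05-04 14:15:30
```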
The Epicor database hardly has any datetime fields. Everything is split into separate date and time fields, and the time fields are integers or decimals that sometimes represent seconds, sometimes hours. That’s a legacy of the database having been designed in the very old days, in a version of Progress that did not have a datetime type.