Server Log Help Needed

Continuing from here in this thread:

I have started on this thing, and it is a bear.

Anyway, if y’all have ever taken a look at the Server Log, it is something else.

I don’t have a lot of data to go on, but I have begun parsing this out into a dashboard format.

So far I have identified these (Top Level) fields:

  • ServerLog
  • DatabaseNotification
  • EcfChangeNotification
  • GlobalLicensing
  • RESTApi
  • Op

And these fields & attributes of “Op” (where the good stuff is):

  • Sql
  • License
  • RESTApi
  • BOReader
  • BpmCustomization
  • Exception
  • BAQ

What I need help with is identifying any fields in the top-level node, or, more importantly, the “Op”
node, that I don’t have listed, along with associated sample data, so I can write parsers for those.

I have parsers written for all the ones listed already.
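For anyone following along, the dispatch for those top-level nodes might be sketched like this. This assumes each log entry deserializes to a JSON object keyed by node name, which is an assumption about the format, not a documented spec; the node names come straight from the list above:

```python
import json

# Top-level node names taken from the list in this thread.
KNOWN_NODES = [
    "ServerLog",
    "DatabaseNotification",
    "EcfChangeNotification",
    "GlobalLicensing",
    "RESTApi",
    "Op",
]

def parse_entry(line: str) -> dict:
    """Parse one log line and tag it by which top-level node it contains.

    Unknown nodes are surfaced rather than dropped, so new field types
    (the whole point of this thread) show up instead of vanishing."""
    entry = json.loads(line)
    for key in KNOWN_NODES:
        if key in entry:
            return {"kind": key, "data": entry[key]}
    return {"kind": "Unknown", "data": entry}
```

Each known kind can then be routed to its own parser (Sql, License, BAQ, etc. under “Op”), and anything tagged `Unknown` is a candidate for a new parser.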

Thanks!

@gpayne @MikeGross , y’all are gonna owe me a beer.


Ah, a new fan of structured logging! It is a part of the DevOps Idea - see comments.


I have been checking the server logs daily for years, even back in the E9 days, but only with a simple log reader.

You will want to get BAQStatement


What is it? How do I make it happen?

Or if you or someone else has a sample, I can work with that.

Does anyone know if the “correlationId” in “Op” is unique to each “Op” section?

Sample sent. If you need more to play with I generate 300MB of logs a day.

Those look to be unique guids.
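A quick way to check that across a batch of parsed “Op” entries (a hypothetical helper; the `correlationId` key name is taken from the question above, so adjust it if your logs spell it differently):

```python
from collections import Counter

def duplicate_correlation_ids(ops: list[dict]) -> list:
    """Return any correlationId that appears in more than one Op node.

    An empty result supports the "unique per Op section" theory for
    the sample you feed it; it is evidence, not proof."""
    counts = Counter(
        op["correlationId"] for op in ops if "correlationId" in op
    )
    return [cid for cid, n in counts.items() if n > 1]
```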

Add IceAppServer

Yes, it’s what you get on the client when there’s an error. You can use it to find more details in the server log.


OK, after putting a little bit of time into this, and playing with some logs that are not my own,
some issues have become clearer.

While this can be parsed out pretty efficiently into a flat database structure, it is a bit of a mess.

If you are passing this data back to a client, especially from the cloud, it’s a LOT of data.

So now I’m stepping back a little, because I want some ideas on how this should work.

I have a prebuilt where clause parser I could shoehorn in here to limit data return, so that’s available.

But anyway, step outside of the box and put your thinking caps on, and give me some ideas.

Should this be completely flat? Should you only get one type of row, or sub-rows, at a time, etc.?

Should it write out to a UD table for easy (temporary?) querying, or read the log on the fly for every query?

So many questions.

C’mon @Mark_Wonsil , I know you have something.

I do. But do folks want to hear more about cloud? :thinking:

Let me find a hybrid thingy…

I do, I do!

I’m really mostly interested in this particular thing, but I’ll always listen to you ramble on. I can always choose to ignore you :rofl:

Well, in the old days, we did logging by hand.

The Simpsons GIF by FOX TV

And we had to manually roll our logs.

Vintage Lumberjack GIF by US National Archives

But Serilog…

OK, I’m done. Not for long, so put on some Loggins and Messina.

Logging is one of the three observability tools: metrics, logs, and traces.

  • Use metrics to track the occurrence of an event, count items, time an action, or report the current value of a resource (CPU, memory, etc.)
  • Use logs to record detailed information about an event also monitored by a metric, particularly errors, warnings, or other exceptional situations.
  • A trace provides visibility into how a request is processed across multiple services in a microservices environment. Every trace needs to have a unique identifier associated with it.
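The three signals above can be sketched in plain Python with no vendor tooling at all. The metric name `mrp.errors`, the event names, and the sample part number are all made up for illustration:

```python
import json
import logging
import time
import uuid

# Minimal sketch of the three observability signals:
#   metric - a counter we can compare run over run
#   log    - a structured (JSON) record with the details
#   trace  - a unique id stamped on every record for one request,
#            so related records can be correlated across services
metrics = {"mrp.errors": 0}       # metric: a simple counter
trace_id = str(uuid.uuid4())      # trace: one id per request/run

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_event(event: str, **fields) -> dict:
    """Emit one structured log line stamped with the trace id."""
    record = {"ts": time.time(), "traceId": trace_id, "event": event, **fields}
    logging.info(json.dumps(record))
    return record

metrics["mrp.errors"] += 1        # metric: count the occurrence
log_event("mrp.error", part="XYZ-100", message="No source of supply")
```

Because every log line is machine-readable JSON carrying the trace id, a downstream tool can count, filter, and correlate records automatically instead of a human scrolling through text.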

From a DevOps perspective, we monitor to ensure software quality. When we make changes to the software, did we reduce errors and/or make it more performant? Creating metrics, logs, and traces is easy. Creating ACTIONABLE metrics, logs, and traces requires some planning - which requires thinking about automation. It’s very inefficient to have humans process logs. We don’t have time and will only do so during a post-mortem. When using observability tools, we can be proactive and have the system notify or even react to observability data:

  • How many errors in MRP? More or fewer than the last run? How long did it run? How does that compare to the last ten runs?
  • How many overall errors after a patch installation? Speed improvement or regression?
  • What’s the current Session Count?
  • How long has that job had no activity?
  • What was the CPU, Memory, networking, and Disc utilization during events?
  • How long since that last successful SQL backup?

All this is dumped into a system that can then alert or even perform actions like:

  • reboot a VM
  • restart a service
  • kill a container that’s unresponsive
  • add more containers (scale out)
  • remove containers (scale in)
  • add an Issue to a GitHub repository
  • message an Admin
  • send shocks to the developer’s collar

There are many tools, mostly cloud-based (see below), but it’s worth mentioning Azure Monitor, which works for both on-prem and cloud workloads. Click the link above to learn more.

Other Tools


Thanks, now it’s stuck in my head…

You’re alright.