Winter Hackathon 2019: Stock movement (& support) explorer

Here’s a support scenario we get regularly, in the form of a query from a user:

“Our warehouse was told to expect x number of products but they are reporting some are missing, they only have y.”

The first step in resolving the issue (as with any support operation) is understanding the question. The example above translates to:

“Our system says that Warehouse Z receipted only X units of stock against order A, which contained a total quantity of Y. Where has the Y-X stock gone?”

You have several tables of data belonging to different systems, all serving a common service and all interacting with one another.

Blend in some users from across different teams, all doing different things, but all working towards the common goal of advancing stock from one table or system to another.

You start looking into what could have caused such a situation; some systems have a UI, others require database queries. If the answer isn’t immediately obvious, you expand that query out, or you get another team involved (or both).

Why didn’t we ever develop a tool to analyse and report on all of this data and how it pertains to any particular order, product, pick, delivery, etc.?

The idea

A tool we desperately need! Give it an order reference or stock movement reference and it would provide a single view of all that data, where it is, and – most importantly – where it isn’t! This alone would save hours of developer time every month and might also (just) empower our users to research the issues themselves and address them.

The analysis

Stock movements come in many different forms: there’s the intake from suppliers to warehouses, stock transfers from site to site, and stock adjustments, to name a few. After a morning of investigation (and with time running short) we realised that only one area could be the target of our Hackathon project: Stock Intakes.

We ignored the processes which actually moved or generated the data and instead concentrated on our application “sucking in” the data which _did_ exist from across 4 different databases (a mix of MySQL, MS-SQL and MongoDB). In total, Intakes alone involved 13 different tables of data.
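To give a flavour of the data involved, here’s a minimal sketch of a few of those table shapes in TypeScript. The names and fields are purely illustrative, not the real schemas; the actual tables were more numerous and far wider:

```typescript
// Hypothetical shapes for three of the intake tables. The real project
// pulled 13 tables from MySQL, MS-SQL and MongoDB sources.
interface PurchaseOrderLine {
  orderRef: string;
  sku: string;
  quantityOrdered: number;
}

interface WarehouseReceipt {
  orderRef: string;
  sku: string;
  quantityReceipted: number;
  receiptedAt: string; // ISO timestamp
}

interface DocketEntry {
  orderRef: string;
  sku: string;
  quantityDocketed: number;
}
```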

Working groups

The stack was to be .NET Core 3.1 (released just a couple of days earlier) for the back-end and ReactJS for the front-end. Visual Studio 2019 ships a solution template for exactly that – although my favourite front-end language du jour (TypeScript) is not enabled, and quite a lot of the npm packages were fantastically out of date! It did take a while to get TypeScript configured and linting working with ESLint (npm package hell, as usual).
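For the curious, the linting setup we landed on was roughly the standard @typescript-eslint arrangement. This is a sketch from memory, so treat the details as an approximation rather than our exact config:

```typescript
// .eslintrc.js: a minimal TypeScript + React ESLint setup. The parser
// and plugin package names are the standard ones; versions will vary.
module.exports = {
  parser: '@typescript-eslint/parser',
  plugins: ['@typescript-eslint', 'react'],
  extends: [
    'eslint:recommended',
    'plugin:@typescript-eslint/recommended',
    'plugin:react/recommended',
  ],
  settings: {
    react: { version: 'detect' },
  },
};
```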

I split the team of 4 into 2 groups: front-end and back-end. The back-end team would create the API endpoints for each table of data. The front-end team would call those APIs and would also be responsible for the logic of determining what was wrong once all that data was collated.
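On the front-end side, the collation step might look something like this, using the hypothetical interfaces sketched earlier. The endpoint paths and the fetchJson helper are inventions for the example, not our actual routes:

```typescript
// Pull each table's endpoint in parallel, then hand the lot over
// to the validation logic. Routes here are illustrative.
async function fetchJson<T>(url: string): Promise<T> {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`${url} returned ${response.status}`);
  return response.json() as Promise<T>;
}

async function loadIntakeData(orderRef: string) {
  const [orderLines, receipts, dockets] = await Promise.all([
    fetchJson<PurchaseOrderLine[]>(`/api/orders/${orderRef}/lines`),
    fetchJson<WarehouseReceipt[]>(`/api/receipts/${orderRef}`),
    fetchJson<DocketEntry[]>(`/api/dockets/${orderRef}`),
  ]);
  return { orderLines, receipts, dockets };
}
```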

Naming things is hard

At one stage I realised that I was spending an inordinate amount of time trying to come up with a name for the main TypeScript object which held, processed and validated all the data. I ended up calling it the BFO (with a h/t and a nod to the BFG). A single public method named Validate seemed appropriate.
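Stripped right down, the BFO amounted to something like the sketch below: it holds the collated data and exposes that one public Validate method, which cross-checks the quantities across systems. The field names carry over from the hypothetical interfaces above; the real object validated considerably more than this:

```typescript
// A minimal sketch of the BFO: holds everything, validates everything.
// Real validation covered many more tables and rules than this.
class BFO {
  constructor(
    private orderLines: PurchaseOrderLine[],
    private receipts: WarehouseReceipt[],
    private dockets: DocketEntry[],
  ) {}

  // Returns one human-readable line per discrepancy found.
  public Validate(): string[] {
    const problems: string[] = [];
    for (const line of this.orderLines) {
      const receipted = this.receipts
        .filter(r => r.sku === line.sku)
        .reduce((sum, r) => sum + r.quantityReceipted, 0);
      const docketed = this.dockets
        .filter(d => d.sku === line.sku)
        .reduce((sum, d) => sum + d.quantityDocketed, 0);
      if (receipted !== line.quantityOrdered) {
        problems.push(
          `${line.sku}: ordered ${line.quantityOrdered}, receipted ${receipted}`
        );
      }
      if (docketed !== receipted) {
        problems.push(
          `${line.sku}: receipted ${receipted}, docketed ${docketed}`
        );
      }
    }
    return problems;
  }
}
```

Wiring it up was then trivial: collate the data, construct the BFO, and feed Validate’s output straight into the UI.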

Oddly, all the members of the team instinctively knew the purpose of the BFO. I’m not sure if that’s a reflection of my awesome naming techniques or of their possible gaming heritage?

The result

After 3 days, with one team member off on a day’s holiday and another off sick, we presented what we had. It worked. For the example order we tested with, the system informed us that the warehouse had receipted only half of the order. From that, we could immediately determine why the missing stock wasn’t showing within our Dockets system. Getting to that data and conclusion was now reduced from a potential hour of investigation across two teams down to a mere 5 seconds or so.

The result (addendum)

Ok… the validation routine wasn’t too smart: it only reported discrepancies between the source data; it couldn’t tell us what it thought the problem probably was. But that’s just detail: with internal knowledge of the process it was still much easier for us (and possibly our users) to determine the cause of an issue.

What I really wasn’t happy with was the UI: it simply dumped each table of data into an HTML table (nicely formatted with Bootstrap, of course), navigable via tabs, with a summary tab which dumped the text generated by that one public Validate method on the BFO.
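In spirit, it was something like the following React sketch (component and prop names invented for illustration): one dumped table per tab, plus a Summary tab printing whatever Validate returned.

```typescript
import React, { useState } from 'react';

// Dumps each table's rows into a Bootstrap-styled HTML table.
function DataTable({ rows }: { rows: Record<string, unknown>[] }) {
  if (rows.length === 0) return <p>No data</p>;
  const columns = Object.keys(rows[0]);
  return (
    <table className="table table-striped">
      <thead>
        <tr>{columns.map(c => <th key={c}>{c}</th>)}</tr>
      </thead>
      <tbody>
        {rows.map((row, i) => (
          <tr key={i}>
            {columns.map(c => <td key={c}>{String(row[c])}</td>)}
          </tr>
        ))}
      </tbody>
    </table>
  );
}

// One tab per table, plus a Summary tab for the Validate output.
export function IntakeExplorer(props: {
  tables: Record<string, Record<string, unknown>[]>;
  summary: string[]; // output of BFO.Validate()
}) {
  const [active, setActive] = useState('Summary');
  const tabs = ['Summary', ...Object.keys(props.tables)];
  return (
    <div>
      <ul className="nav nav-tabs">
        {tabs.map(name => (
          <li key={name} className="nav-item">
            <button
              className={'nav-link' + (name === active ? ' active' : '')}
              onClick={() => setActive(name)}
            >
              {name}
            </button>
          </li>
        ))}
      </ul>
      {active === 'Summary' ? (
        <ul>{props.summary.map(line => <li key={line}>{line}</li>)}</ul>
      ) : (
        <DataTable rows={props.tables[active]} />
      )}
    </div>
  );
}
```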

The best bit of all, though, is that the developers in the two teams responsible for stock movements want the tool released post-haste, so that’s great!

Next steps: deploy to Kubernetes and build a proper single-page dashboard with key information pertaining to the order. Those will come as I spend some sneaky hours on it over the next few weeks 🙂
