Case Studies — Production Engagements
Every case study on this page is from a real production engagement. Client organisations are anonymised by default: sector, geography, and platform are described where relevant to the case, while firm names, personnel, and identifying details are withheld. Outcome figures are drawn directly from pre- and post-engagement measurements, not estimates.
The situation. An overnight ETL pipeline at a Channel Islands financial services firm had been running progressively slower for several months. By the time Yitron was engaged, the job was consuming six hours of the overnight window — leaving no margin for reruns or delays before the business day started. Any failure meant either a delayed start or incomplete data for the morning reporting run.
What had already been tried. The internal team was experienced. Standard diagnostic steps had been followed: indexes rebuilt, statistics updated, query execution plans reviewed, tempdb configuration checked. None of it made a material difference. The job continued to slow. After several investigation cycles it had been marked as a known issue — performance degradation with no identified cause.
Why it was missed. Standard SQL Server performance diagnosis focuses on query execution plans, index coverage, and statistics. These are the right tools for query-level problems. This was not a query-level problem — it was a configuration-level problem affecting the I/O subsystem. Without looking at wait statistics at the right granularity, and without understanding what those wait types indicate about the storage layer, the root cause was not visible.
The fix. A single configuration change. The job that had been running for six hours completed in seven minutes on the first run after the fix was applied. It has run within its overnight window every night since.
What this case demonstrates
This problem was not in the code. Standard query-level tools produced correct findings — about the wrong layer. The defect was in server configuration, invisible to execution plan analysis.
SQL Server records what it is waiting for. Reading those signals at the right granularity is the diagnostic skill; a sketch of the kind of query involved follows this list.
A problem that has defeated standard diagnostic approaches is not necessarily unsolvable. It may require a different level of diagnosis.
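For readers who want to see what this kind of diagnosis looks like in practice, the sketch below is a generic starting point, not the exact queries used in the engagement. It aggregates SQL Server's wait statistics, filters for the wait types that implicate the I/O subsystem, and then breaks storage latency down per database file; the wait types listed are a common starting set, not an exhaustive one.

```sql
-- Illustrative only: a first-pass look at what SQL Server has been
-- waiting on since the last restart (or since the counters were cleared).
-- The engagement itself used finer-grained, per-file measurements.
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0        AS wait_time_s,
    signal_wait_time_ms / 1000.0 AS signal_wait_s,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'   -- reads waiting on storage
   OR wait_type IN ('WRITELOG',        -- log writes waiting on storage
                    'IO_COMPLETION')
ORDER BY wait_time_ms DESC;

-- Per-file latency shows *where* the storage layer hurts:
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```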
Three further cases from real engagements. Each represents a class of problem that recurs across the organisations Yitron works with, in Jersey's financial services sector and beyond.
A financial services data warehouse with progressively slower reporting. Indexes had been added over time, each helping briefly before performance degraded again. The underlying issue was the fact table design itself: a structure that made range queries across large date windows disproportionately expensive regardless of indexing strategy.
A prototype dimensional model redesign, not a full rebuild, benchmarked an order of magnitude faster than the existing structure. The prototype provided the evidence base for a full model migration, de-risking the investment decision.
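The specific redesign is the client's, but the class of change is common enough to sketch. The hypothetical example below, with illustrative table, key, and boundary names, shows the general pattern: aligning the fact table's physical order, through clustering and partitioning on the date key, with the dominant access pattern, so a date-range query reads contiguous pages rather than probing a non-clustered index row by row.

```sql
-- Hypothetical sketch of the class of redesign involved. All names and
-- boundary values are illustrative.
CREATE PARTITION FUNCTION pf_MonthlyDate (date)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

CREATE PARTITION SCHEME ps_MonthlyDate
    AS PARTITION pf_MonthlyDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactTransactions
(
    TransactionDate date          NOT NULL,
    ClientKey       int           NOT NULL,
    InstrumentKey   int           NOT NULL,
    Amount          decimal(19,4) NOT NULL
)
ON ps_MonthlyDate (TransactionDate);

-- The clustered index leads with the date key, so a predicate like
--   WHERE TransactionDate >= '2024-01-01' AND TransactionDate < '2024-04-01'
-- becomes a sequential scan of a few partitions instead of scattered seeks.
CREATE CLUSTERED INDEX cx_FactTransactions
    ON dbo.FactTransactions (TransactionDate, ClientKey)
    ON ps_MonthlyDate (TransactionDate);
```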
A manufacturing company running Microsoft Dynamics NAV across fifteen databases — most hosting multiple companies. The data was distributed across the world. The challenge was not schema mapping — it was extracting data from across the entire estate without impacting production systems, and making it available for live Power BI reporting.
A consolidation layer was designed and built across 6,000+ tables, pulling data from all databases and companies into a single reporting environment without impacting production systems. Power BI connected directly to the consolidated layer, giving management a unified, low-latency reporting view across the whole business for the first time.
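The implementation details are specific to that estate, but the consolidation pattern itself can be sketched. The hypothetical example below uses illustrative database, company, and schema names. In Dynamics NAV, each company's data lives in tables prefixed with the company name, so consolidation means generating one UNION ALL view per logical table across every database and company, stamped with its origin; at 6,000+ tables that generation is scripted from metadata rather than written by hand.

```sql
-- Hypothetical sketch of the consolidation pattern (all names illustrative).
-- The source databases here would be replicated copies, not the live
-- production NAV databases, so reporting load never touches production.
CREATE VIEW rpt.CustomerLedgerEntry AS
SELECT 'EU-Prod'   AS SourceDb, 'Company A' AS Company,
       [Entry No_], [Customer No_], [Posting Date], [Amount]
FROM   NAV_EU.dbo.[Company A$Cust_ Ledger Entry]
UNION ALL
SELECT 'EU-Prod', 'Company B',
       [Entry No_], [Customer No_], [Posting Date], [Amount]
FROM   NAV_EU.dbo.[Company B$Cust_ Ledger Entry]
UNION ALL
SELECT 'APAC-Prod', 'Company C',
       [Entry No_], [Customer No_], [Posting Date], [Amount]
FROM   NAV_APAC.dbo.[Company C$Cust_ Ledger Entry];
```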
A trust and wealth management firm running TWMS (the predecessor to NavOne) with a KYC screening process that required matching client names against sanctions and PEP lists. The existing process ran sequentially against a large dataset — taking several hours per full screening run, which constrained how frequently screening could be performed.
Phonetic matching algorithms were implemented as SQL CLR extensions to handle name variations, transliterations, and partial matches while dramatically reducing false positives. Full screening run time was reduced twentyfold, enabling daily rather than weekly screening cycles.
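The CLR functions themselves are bespoke, but the shape of the technique can be illustrated with SQL Server's built-in phonetic functions. The sketch below is a rough stand-in with hypothetical table and column names: pre-compute a phonetic key for every watch-list name, index it, then compare candidates within a phonetic block instead of scanning the whole list per client. A CLR implementation (for example Double Metaphone with partial-match scoring) handles transliterations and partial matches far better than SOUNDEX, which is the reason for going to CLR at all.

```sql
-- Rough illustration only; the engagement used custom SQL CLR functions,
-- and dbo.SanctionsList / FullName are hypothetical names.
ALTER TABLE dbo.SanctionsList
    ADD NamePhoneticKey AS SOUNDEX(FullName) PERSISTED;

CREATE INDEX ix_SanctionsList_Phonetic
    ON dbo.SanctionsList (NamePhoneticKey);

-- Candidate matches for one client name: equality on the phonetic key
-- makes this an index seek, and DIFFERENCE (0-4) ranks the survivors.
DECLARE @ClientName nvarchar(200) = N'Katharine Smythe';

SELECT s.FullName,
       DIFFERENCE(s.FullName, @ClientName) AS PhoneticScore
FROM   dbo.SanctionsList AS s
WHERE  s.NamePhoneticKey = SOUNDEX(@ClientName)
  AND  DIFFERENCE(s.FullName, @ClientName) >= 3   -- threshold tuned per risk appetite
ORDER BY PhoneticScore DESC;
```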
A 30-minute call is enough to understand whether the problem has a tractable solution and what finding it would involve. No charge. No obligation beyond the conversation.
Book a Discovery Call
enquiry@yitron.co.uk | +44 7829 800454 | Jersey, Channel Islands