Improving Concurrency: Data Modeling with Vertical Partitioning
- Eran Golan
- 2 days ago
Reducing locking, blocking, and deadlocks in wide tables by distributing updates across one-to-one related tables

Not long ago, I worked with a customer who was experiencing persistent blocking and occasional deadlocks in one of their core systems. The application itself wasn’t new, but over the years it had grown significantly. New features had been added, more processes were interacting with the database, and naturally the schema had evolved along the way.
One table in particular stood out. It had gradually grown to contain well over a hundred columns. Originally it had been designed to represent a single business entity in one place, which made the model easy to understand and query. But as more attributes were added over time, the table became increasingly wide.
At first glance, nothing seemed unusual. The indexes were maintained, the hardware was sufficient, and the queries themselves were not especially complex. Yet under load, the system began showing clear signs of contention. Users occasionally experienced slow responses, monitoring tools showed sessions waiting on locks, and from time to time deadlocks would appear in the logs.
After reviewing the workload patterns, an interesting detail became clear. Different processes were updating different columns of the same table. For example, one process might update status-related fields, while another updated metadata or configuration attributes. From a logical perspective, these operations were unrelated. But physically they all targeted the same row.
In relational database systems, when a row is updated, the database engine typically locks that row to maintain transactional consistency. Even if a transaction modifies only a few columns, the entire row is still locked. This meant that two processes updating completely different attributes were still competing for the same row-level lock.
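To make the contention concrete, here is a minimal sketch of two logically unrelated updates hitting the same row. The table and column names are hypothetical, invented for illustration. SQLite (used here so the snippet is self-contained) does not actually use row-level locks, but in row-locking engines such as SQL Server, PostgreSQL, or InnoDB, both statements below must acquire the same row lock:

```python
import sqlite3

# Hypothetical wide table; column names are illustrative, not from the case study.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        status TEXT,          -- touched by the status-update process
        last_sync_at TEXT,    -- touched by the metadata process
        config_json TEXT      -- rarely changes
    )
""")
conn.execute("INSERT INTO orders VALUES (1, 'NEW', NULL, '{}')")

# Process A: updates only status-related fields.
conn.execute("UPDATE orders SET status = 'SHIPPED' WHERE id = 1")

# Process B: updates only metadata fields. Logically independent of A,
# but in a row-locking engine it competes for the SAME row lock,
# because both statements modify row id = 1.
conn.execute("UPDATE orders SET last_sync_at = '2024-01-01' WHERE id = 1")

row = conn.execute(
    "SELECT status, last_sync_at FROM orders WHERE id = 1"
).fetchone()
print(row)  # ('SHIPPED', '2024-01-01')
```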
In practice, this created unnecessary contention. Transactions that should have been able to run independently were blocking each other simply because the data happened to live in the same row. As the system became busier, the chances of deadlocks also increased.
Instead of trying to tune individual queries further, we stepped back and looked at the data model itself. The wide table contained many columns that rarely changed, while only a small subset of attributes was updated frequently. That observation suggested a possible structural improvement.
We decided to apply vertical partitioning. The original wide table was split into several smaller tables that maintained a one-to-one relationship with the main entity. Each new table grouped columns by how frequently they were updated.
Frequently modified attributes were placed in one table, while more stable attributes remained in another. Additional groups of columns were separated into their own tables where appropriate. Logically, the entity still looked the same from the application’s perspective, but physically the data was now distributed across several rows instead of one very wide row.
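A sketch of what such a split might look like, again with hypothetical table and column names. The two hot updates now target different rows in different tables, so in a row-locking engine they no longer compete for the same lock:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical split: stable attributes stay in the main table,
# frequently updated attribute groups move to one-to-one side tables.
conn.executescript("""
    CREATE TABLE orders (          -- stable attributes
        id INTEGER PRIMARY KEY,
        customer TEXT,
        config_json TEXT
    );
    CREATE TABLE order_status (    -- one hot group of columns
        order_id INTEGER PRIMARY KEY REFERENCES orders(id),
        status TEXT
    );
    CREATE TABLE order_sync (      -- another hot group of columns
        order_id INTEGER PRIMARY KEY REFERENCES orders(id),
        last_sync_at TEXT
    );
""")
conn.execute("INSERT INTO orders VALUES (1, 'acme', '{}')")
conn.execute("INSERT INTO order_status VALUES (1, 'NEW')")
conn.execute("INSERT INTO order_sync VALUES (1, NULL)")

# The same two updates as before, but each now locks a row in a
# different table instead of one shared wide row.
conn.execute("UPDATE order_status SET status = 'SHIPPED' WHERE order_id = 1")
conn.execute("UPDATE order_sync SET last_sync_at = '2024-01-01' WHERE order_id = 1")
```

The primary key of each side table doubles as the foreign key to the main table, which is what keeps the relationship strictly one-to-one.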
The impact was noticeable. Updates that previously targeted the same row were now distributed across multiple tables. As a result, concurrent processes were far less likely to block each other because they were no longer competing for the exact same lock. Blocking incidents decreased significantly, and the deadlocks that had appeared periodically almost disappeared.
Queries that required the full entity could still retrieve it through simple joins between the one-to-one tables. Because these relationships were straightforward and properly indexed, the additional joins introduced very little overhead.
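Reassembling the full entity is a matter of primary-key joins. A self-contained sketch, using the same hypothetical split schema as described above:

```python
import sqlite3

# Hypothetical split schema; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders       (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE order_status (order_id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE order_sync   (order_id INTEGER PRIMARY KEY, last_sync_at TEXT);
    INSERT INTO orders VALUES (1, 'acme');
    INSERT INTO order_status VALUES (1, 'SHIPPED');
    INSERT INTO order_sync VALUES (1, '2024-01-01');
""")

# Reassemble the full entity with simple one-to-one joins on the
# primary keys; with these indexes the extra joins are cheap lookups.
row = conn.execute("""
    SELECT o.customer, s.status, y.last_sync_at
    FROM orders o
    JOIN order_status s ON s.order_id = o.id
    JOIN order_sync   y ON y.order_id = o.id
    WHERE o.id = 1
""").fetchone()
print(row)  # ('acme', 'SHIPPED', '2024-01-01')
```

A view over this join could even preserve the original wide-table shape for read-only consumers.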
What was particularly interesting about this case was that the solution did not involve advanced tuning techniques or hardware changes. The improvement came from reconsidering the structure of the data model and aligning it more closely with how the system actually used the data.
Wide tables are often convenient during early design stages, and over time they tend to grow as systems evolve. But in high-concurrency environments, concentrating many frequently updated attributes in a single row can create hidden contention points.
Sometimes the best way to improve concurrency is simply to distribute the work. By splitting a wide table into several one-to-one related tables, the system can reduce locking conflicts and allow more transactions to proceed independently.
In this customer’s case, a relatively small modeling change made a meaningful difference in the stability and responsiveness of the system.