In part 1, I provided a basic comparison between temporary tables and table variables. If you haven't read it yet, I recommend doing so before reading part 2 (this post).
In this blog post, I am going to focus on a performance comparison between the two. As I explained in part 1, the biggest performance-related difference between them is statistics. Temporary tables have statistics, much like regular tables, while table variables have no statistics at all.
Statistics are the number-one tool the cardinality estimator has for estimating the cardinality (number of rows) at various stages of the execution plan.
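You can inspect the statistics the optimizer relies on with `DBCC SHOW_STATISTICS`. Here is a sketch, assuming an index named `IX_Games_Profit` exists on the `Profit` column (the index name is hypothetical):

```sql
-- Show the header, density vector, and histogram that the cardinality
-- estimator uses for the Profit column.
-- IX_Games_Profit is an assumed index name for illustration.
DBCC SHOW_STATISTICS ('Casino.Games', IX_Games_Profit);
```

The histogram in the output is what allows the estimator to predict how many rows fall within a given range of values.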
For example, let's assume that the Casino.Games table has one million rows, and we run the following query:
SELECT *
FROM Casino.Games
WHERE Profit BETWEEN $10.00 AND $20.00;
There is an index on the Profit column, and the optimizer needs to estimate the number of rows in the table whose profit is between $10.00 and $20.00. If only a few rows qualify, it will probably choose an index seek followed by a few key lookups. But if many rows qualify, it will prefer to perform a clustered index scan.
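The non-clustered index on the Profit column might look like this (a sketch; the index name is an assumption, not taken from the original schema):

```sql
-- A non-clustered index on Profit lets the optimizer seek on the
-- range predicate; rows it selects still require key lookups to
-- fetch the remaining columns.
-- IX_Games_Profit is an assumed name for illustration.
CREATE NONCLUSTERED INDEX IX_Games_Profit
ON Casino.Games (Profit);
```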
Here is the execution plan:
See how the cardinality estimator used the statistics on the Profit column to come up with a perfect estimate of the number of rows. Based on this estimate, the optimizer chose a clustered index scan, thereby avoiding 54,415 key lookups.
Now, let's try to do the same thing with a temporary table. First, we will create the temporary table; then we will copy the data from the Casino.Games table into it; then we will create a non-clustered index on the Profit column; and finally, we will run the same query against the temporary table.
CREATE TABLE #Games -- assuming the temp table is named #Games
(
	GameId INT NOT NULL ,
	GameType NVARCHAR(50) NOT NULL ,
	PlayerId INT NOT NULL ,
	BetAmount MONEY NOT NULL ,
	Profit MONEY NOT NULL ,
	PRIMARY KEY CLUSTERED (GameId) -- assuming GameId is the clustering key
);
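The remaining steps described above — copying the data, indexing the Profit column, and re-running the query — might look like this. This is a sketch; the temp table name `#Games` and the index name are assumptions:

```sql
-- Copy the rows from the permanent table into the temporary table
-- (#Games is an assumed temp table name for illustration).
INSERT INTO #Games (GameId, GameType, PlayerId, BetAmount, Profit)
SELECT GameId, GameType, PlayerId, BetAmount, Profit
FROM Casino.Games;

-- Index the Profit column, mirroring the index on the permanent table
-- (IX_TempGames_Profit is an assumed name).
CREATE NONCLUSTERED INDEX IX_TempGames_Profit
ON #Games (Profit);

-- Run the same query against the temporary table.
SELECT *
FROM #Games
WHERE Profit BETWEEN $10.00 AND $20.00;
```

Because the temporary table has statistics, the optimizer can make the same kind of informed estimate here as it did against the permanent table.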