Friday, October 11, 2013

View Data Volume Estimates

When the optimizer has to estimate the data volume (the BYTES column in the plan output), it usually bases this estimate on the column statistics, if applicable and available (complex expressions, for example, can prevent this).
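With statistics gathered, the per-column figure that feeds this estimate (AVG_COL_LEN) can be inspected in the dictionary; a minimal sketch, where T1 stands for any analyzed table:

select
        column_name
      , avg_col_len
from
        user_tab_col_statistics
where
        table_name = 'T1'
;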

However, whenever there is a VIEW operator in an execution plan that represents an unmerged view, the optimizer "loses" this information and falls back to defaults that are based on the column definition.

Depending on the actual content of the columns this can lead to dramatic differences in data volume estimates.

Both under- and overestimates are possible, because for character-based columns these defaults seem to assume a 50% fill grade, so a VARCHAR2(100 BYTE) column counts as 50 bytes of data volume.

For multi-byte character sets the same rule applies to the maximum byte width of a column using "char" semantics, so a VARCHAR2(1000 CHAR) column counts as 2,000 bytes of data volume when using the AL32UTF8 character set, which is 50% of the 4,000 bytes such a column could require at most. So with a multi-byte character set this effect can be exaggerated even further.
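The declared maximum byte width these defaults are based on can be checked in the dictionary; a minimal sketch, assuming an AL32UTF8 database (the table name T_CHAR is made up for illustration):

create table t_char (col1 varchar2(1000 char));

select
        column_name
      , data_length
      , char_length
      , char_used
from
        user_tab_columns
where
        table_name = 'T_CHAR'
;

In an AL32UTF8 database DATA_LENGTH should report 4000 (bytes, four bytes per character), while CHAR_LENGTH reports 1000 and CHAR_USED shows 'C' for char semantics.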

The cost calculation of data access operations like full table scans isn't influenced by these different data volume estimates, but for hash joins the decision which of the two row sources will be used as the hash (build) input and which as the probe input is basically driven by the estimated data volume.

Of course the cost estimates of other operations like sorts or aggregates are also based on the data volumes.

But for hash joins in particular, the possible difference in data volume estimates can lead to bad decisions: using the effectively larger row source for building the hash table leads to slower, less efficient join processing with increased memory, TEMP and CPU usage.

Here is a simple example demonstrating the point.

First, create two tables, both using VARCHAR2(4000 BYTE) fields, but one populates these fields with only a single character, whereas the other fills them completely:

create table t1
as
select
        rownum as t1_id
      , cast('x' as varchar2(4000)) as large_vc1_not_filled
      , cast('x' as varchar2(4000)) as large_vc2_not_filled
      , cast('x' as varchar2(4000)) as large_vc3_not_filled
      , cast('x' as varchar2(4000)) as large_vc4_not_filled
from
        dual
connect by
        level <= 1e5
;

exec dbms_stats.gather_table_stats(null, 't1')

create table t2
as
select
        rownum as t2_id
      , rpad('x', 4000) as large_vc1_filled
      , rpad('x', 4000) as large_vc2_filled
      , rpad('x', 4000) as large_vc3_filled
      , rpad('x', 4000) as large_vc4_filled
from
        dual
connect by
        level <= 1e4
;

exec dbms_stats.gather_table_stats(null, 't2')

So what do we get if we simply join these two tables?

select * from t1, t2 where t1_id = t2_id;

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   152M|       | 13773   (1)| 00:02:46 |
|*  1 |  HASH JOIN         |      | 10000 |   152M|  2448K| 13773   (1)| 00:02:46 |
|   2 |   TABLE ACCESS FULL| T1   |   100K|  1269K|       |    70   (2)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T2   | 10000 |   152M|       |  6011   (1)| 00:01:13 |
-----------------------------------------------------------------------------------

So we can see that the optimizer understands that the table with more rows actually results in a much smaller row source in terms of data volume, since its character columns hold only a single character.
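You can cross-check this against the table-level statistics; a quick sketch:

select
        table_name
      , num_rows
      , avg_row_len
from
        user_tables
where
        table_name in ('T1', 'T2')
;

T2's AVG_ROW_LEN should come out close to 16,000 bytes per row (four fully filled 4,000 byte columns plus the ID), while T1's is just a handful of bytes, matching the 152M vs. 1269K estimates in the plan above.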

What happens if we now deliberately turn the tables into views?

select * from (select /*+ no_merge */ * from t1), (select /*+ no_merge */ * from t2) where t1_id = t2_id;

------------------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   152M|       | 47850   (1)| 00:09:35 |
|*  1 |  HASH JOIN          |      | 10000 |   152M|    76M| 47850   (1)| 00:09:35 |
|   2 |   VIEW              |      | 10000 |    76M|       |  6011   (1)| 00:01:13 |
|   3 |    TABLE ACCESS FULL| T2   | 10000 |   152M|       |  6011   (1)| 00:01:13 |
|   4 |   VIEW              |      |   100K|   764M|       |    70   (2)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| T1   |   100K|  1269K|       |    70   (2)| 00:00:01 |
------------------------------------------------------------------------------------

You can now spot what I've described above: the T2 row source is underestimated by 50% at the VIEW operator (152M vs. 76M bytes), because its character columns are actually filled to their maximum size, whereas T1 is now heavily overestimated in size (1269K vs. 764M (!) bytes). These differences mean that the hash join now uses the actually much larger row source T2 to build the hash table. You can see the effect already in the optimizer's estimates: it now assumes 76M of TEMP space usage for the hash join instead of 2448K when simply joining the tables.

As a side note, this is one of the areas where Dynamic Sampling has a severe shortcoming when its estimates are compared to those based on actual statistics.

This is what I get when deleting the stats from both tables and running the simple join again:

exec dbms_stats.delete_table_stats(null, 't1')

exec dbms_stats.delete_table_stats(null, 't2')

select * from t1, t2 where t1_id = t2_id;

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10909 |   166M|       | 49209   (1)| 00:09:51 |
|*  1 |  HASH JOIN         |      | 10909 |   166M|    83M| 49209   (1)| 00:09:51 |
|   2 |   TABLE ACCESS FULL| T2   | 10909 |    83M|       |  6011   (1)| 00:01:13 |
|   3 |   TABLE ACCESS FULL| T1   |   102K|   785M|       |    70   (2)| 00:00:01 |
-----------------------------------------------------------------------------------

Since Dynamic Sampling doesn't evaluate the average row size, it applies a similar (but somewhat different) default assumption to the VIEW operator, and again these estimates make the hash join use the "wrong" row source to build the hash table.

And finally, it gets even worse when combining the VIEW variant with Dynamic Sampling:

select * from (select /*+ no_merge */ * from t1), (select /*+ no_merge */ * from t2) where t1_id = t2_id;

------------------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |   102K|  1570M|       | 49209   (1)| 00:09:51 |
|*  1 |  HASH JOIN          |      |   102K|  1570M|    83M| 49209   (1)| 00:09:51 |
|   2 |   VIEW              |      | 10909 |    83M|       |  6011   (1)| 00:01:13 |
|   3 |    TABLE ACCESS FULL| T2   | 10909 |    83M|       |  6011   (1)| 00:01:13 |
|   4 |   VIEW              |      |   102K|   785M|       |    70   (2)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| T1   |   102K|   785M|       |    70   (2)| 00:00:01 |
------------------------------------------------------------------------------------

The VIEW operator now doesn't change the data volume estimate based on the Dynamic Sampling information, but the hash join suddenly changes the estimated data volume to 1570M (!) bytes, because the join cardinality estimate is now 102K rows instead of the more realistic 10,000. So the cardinality estimate is now skewed due to the VIEW operator, too.

Summary


If you happen to have a large discrepancy between the column definitions and the actual column usage, which is particularly relevant for character-based columns, the data volume estimates can vary significantly between merged and non-merged views. Using a multi-byte character set can exaggerate this effect in the case of char semantics.

Most significantly, this can lead to bad hash join decisions, such as using the larger row source to build the hash table.
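If the non-merged views can't be avoided, one possible damage-limitation sketch is to dictate the build input explicitly via hints, assuming you already know which row source really is the smaller one:

select /*+
          leading(v1 v2)
          use_hash(v2)
          no_swap_join_inputs(v2)
       */
        -- force v1 (over the actually smaller T1) to stay the hash (build) input
        *
from
        (select /*+ no_merge */ * from t1) v1
      , (select /*+ no_merge */ * from t2) v2
where
        t1_id = t2_id
;

LEADING makes v1 the left (build) input of the hash join, and NO_SWAP_JOIN_INPUTS prevents the optimizer from swapping the inputs based on its flawed data volume estimates. This is a sketch, not a general fix: hard-coded hints break down when the data distribution changes.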

Whether this effect of the VIEW operator is a feature or a bug I can't tell; there might be good reasons why the column statistics information gets lost, but it can certainly lead to performance problems, in particular with hash joins.

The effect can be reproduced across all currently supported versions including 12.1.
