Hello, I have a big performance problem on an Oracle 11g database. I have a table with 4 columns:
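A plausible reconstruction, with the table name and the fourth column invented for illustration (only loc_id, starttime, and endtime are named in the thread):

    CREATE TABLE measurements (
        loc_id    VARCHAR2(20),   -- location code, e.g. 00000000000300100001
        starttime DATE,
        endtime   DATE,
        value     NUMBER          -- name and type of the fourth column are guesses
    );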
I have about 12 million records in the table. Now I have a statement like this:
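The statement is likewise reconstructed here from the three range conditions discussed in the answers below (bind names are invented):

    SELECT *
      FROM measurements
     WHERE loc_id LIKE '000000000003%'
       AND starttime <= :period_end
       AND endtime   >= :period_start;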
The problem is that if the query period is far in the past, the performance is much better than for a query period in the present or the future. My index consists of loc_id, starttime, and endtime. Is there any way to increase the performance of this statement? asked Oct 11 '11 at 15:19 Sylo
Hi! You are running into the chess-board problem, described in this article on index merge. That means you have more than one independent range condition. Your particular query actually has three independent range conditions, on different columns, because LIKE 'xxx%' is a range condition as well. There are only two things you could do differently.
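As background, a minimal sketch of why the described index struggles with these predicates, reusing the invented object names from above: a B-tree can use only the leading range condition to delimit the scanned portion of the index; the remaining range conditions act as mere filters on every entry visited.

    -- Hypothetical index matching the question:
    CREATE INDEX obs_loc_time ON measurements (loc_id, starttime, endtime);

    -- With loc_id LIKE '000000000003%'
    --  AND starttime <= :period_end
    --  AND endtime   >= :period_start
    -- only the LIKE prefix narrows the scanned index range; the two
    -- date predicates are evaluated against every index entry in it.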
Posting an execution plan is always helpful: how to get an Oracle execution plan. answered Oct 11 '11 at 16:52 Markus Winand ♦♦
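For reference, a minimal way to capture such a plan in Oracle, reusing the reconstructed statement from above (table and bind names are assumptions):

    EXPLAIN PLAN FOR
    SELECT *
      FROM measurements
     WHERE loc_id LIKE '000000000003%'
       AND starttime <= :period_end
       AND endtime   >= :period_start;

    -- Render the plan just explained:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);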
Hello! Thanks for the quick answer. The number of rows returned is the same, but there are far fewer distinct loc_ids in the table than rows (about 200 distinct loc_ids versus 12 million rows), and the loc_ids differ mostly in the last digits (e.g. 00000000000300100001, 00000000000300200001, 00000000000300300001, ...) because it is a location code: 000000000003 means Germany and 000000000001 means the USA. Until now, only data from Germany has been inserted into the table. Here is the execution plan:
answered Oct 11 '11 at 18:41 Sylo
I guess it is the same execution plan, no matter which dates you query? Are your statistics up to date?

Currently, your LIKE filter selects every row, because your table contains records for Germany only, right? Is that going to change? If yes, can you test with more appropriate test data? Testing with incomplete data very often results in wrong optimizations, so I advise taking or generating appropriate test data.

If I understood your LIKE filter correctly, the INDEX RANGE SCAN will actually scan the entire index and then access the table to fetch the matching rows. A full table scan could be faster in that case. However, if you add other location codes, it will read only a part of the index, so that the full table scan doesn't make sense anymore.

If your LIKE filter stays unselective, you should use another column in the first index position (since you have range conditions only, you should take the most selective column -- the only exception to the myth).

answered Oct 14 '11 at 08:34 Markus Winand ♦♦
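A sketch of that last suggestion, reusing the invented object names from above. Which of the three columns is the most selective depends on the actual data, so the leading column chosen here (starttime) is an assumption to verify:

    -- If the loc_id prefix stays unselective, lead with the most
    -- selective range column instead (assumed to be starttime):
    CREATE INDEX obs_time_loc ON measurements (starttime, endtime, loc_id);

With this order, a query can delimit the index scan on starttime, while the unselective loc_id condition is demoted to the filter it effectively already was.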