Symptom
SQL statements run for a long time or consume a high amount of resources in terms of memory and CPU.
Environment
SAP HANA
Cause
1. Where do I find information about SQL statement tuning provided by SAP?
2. Which indications exist for critical SQL statements?
3. Which prerequisites are helpful for SQL statement tuning?
4. How can I identify a specific SQL statement?
5. What is an expensive SQL statement?
6. How can time information in the SQL cache (M_SQL_PLAN_CACHE) be interpreted?
7. How can I determine the most critical SQL statements?
8. How can I determine and interpret basic performance information for a particular SQL statement?
9. Which options exist to understand the execution of a SQL statement in detail?
10. What are typical approaches to tune expensive SQL statements?
11. Are secondary indexes required to provide an optimal performance?
12. Which advanced features exist for tuning SQL statements?
13. Are there standard recommendations for specific SQL statements available?
14. Is it required to create optimizer statistics in order to support optimal execution plans?
15. Are all database operations recorded in the SQL cache (M_SQL_PLAN_CACHE)?
16. Can sorting be supported by an appropriate index?
17. Is it possible to capture bind values of prepared SQL statements?
18. How can the performance of INSERTs and data loads be tuned?
19. Why are there significant differences between SQL statements on ABAP and SAP HANA side?
20. How does the EXPLAIN functionality work?
Resolution
1. Where do I find information about SQL statement tuning provided by SAP?
The SAP HANA Troubleshooting and Performance Analysis Guide contains detailed information about tuning SQL statements. If you want to optimize ABAP coding for SAP HANA, you can refer to the blog Performance Guidelines for ABAP Development on SAP HANA.
2. Which indications exist for critical SQL statements?
The following SAP HANA alert indicates problems in the SQL area:
Alert | Name | SAP Note | Description |
39 | Long-running statements | 1977262 | Identifies long-running SQL statements. |
The following checks of SQL: "HANA_Configuration_MiniChecks" (SAP Note 1999993) also point to potential issues with SQL statements:
Check ID | Details |
1110 | SQL using in average > 1 connection (last day) |
1112 | SQL using in average > 1 thread (last hour) |
1113 | SQL using in average > 1 thread (last day) |
1115 | Longest running current SQL statement (s) |
1118 | Longest running current job (s) |
1120 | Exp. stmt. trace: SQL running > 1 h (last day) |
1160 | Average COMMIT time (ms) |
3. Which prerequisites are helpful for SQL statement tuning?
See SAP Note 2000000 for more information about an optimal SAP HANA performance configuration and useful prerequisites for performance and SQL statement optimization.
4. How can I identify a specific SQL statement?
The most obvious way to identify a specific SQL statement is its SQL text. Because the text can be quite long and several similar SQL texts can exist, it is more practical to identify a SQL statement based on a hash value that is derived from the SQL text. This hash value is called statement hash and can be found in column STATEMENT_HASH of SAP HANA performance views like M_SQL_PLAN_CACHE, M_EXPENSIVE_STATEMENTS or M_SERVICE_THREAD_SAMPLES. The statement ID (column STATEMENT_ID) can't be used for that purpose because it is based on a connection ID and not on the SQL statement text.
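For illustration, a minimal query against M_SQL_PLAN_CACHE for a known statement hash might look as follows (the hash value is a placeholder to be replaced with the actual hash):
-- Look up the SQL cache entry for one statement hash
SELECT STATEMENT_HASH, STATEMENT_STRING, EXECUTION_COUNT, TOTAL_EXECUTION_TIME
FROM M_SQL_PLAN_CACHE
WHERE STATEMENT_HASH = '<statement_hash>';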
5. What is an expensive SQL statement?
An expensive SQL statement is a database access that shows high values in areas like:
- High execution time: Most important in the area of performance analysis are SQL statements with a particularly high overall runtime. The overall runtime counts, so it doesn't matter if a statement is executed once with a duration of 1,000 seconds or 1 million times with an average duration of 1 ms; in both cases the statement is responsible for a total duration of 1,000 seconds.
- High memory utilization
- High CPU consumption
- High lock wait time
6. How can time information in the SQL cache (M_SQL_PLAN_CACHE) be interpreted?
Tables like M_SQL_PLAN_CACHE, M_SQL_PLAN_CACHE_RESET and HOST_SQL_PLAN_CACHE contain several time-related columns and it is not always clear how to interpret them:
Time column type | Description |
CURSOR | Contains the overall cursor time including SAP HANA server time and client time. If the client performs other tasks between fetches of data, the cursor time can be much higher than the SAP HANA server time. This can result in MVCC issues because old versions of data need to be kept until the execution is finished. |
EXECUTION | Contains the execution time (open + fetch + lock wait + close) on SAP HANA server side; does not include table load and preparation time. |
EXECUTION_OPEN | Contains the open time on SAP HANA server side. Includes the actual retrieval of data in case of column store accesses with early materialization. |
EXECUTION_FETCH | Contains the fetch time on SAP HANA server side. Includes the actual retrieval of data in case of row store accesses or late materialization. |
EXECUTION_CLOSE | Contains the close time on SAP HANA server side. |
TABLE_LOAD | Contains the table load time during preparation; is part of the preparation time. |
PREPARATION | Contains the preparation time. |
LOCK_WAIT | Contains the transaction lock wait time; internal locks are not included. |
From a SQL tuning perspective the most important information is the total elapsed time of the SQL statement, which is the sum of preparation time and execution time.
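As a sketch of this interpretation, the total elapsed time per cached statement can be derived from the cumulated microsecond counters in M_SQL_PLAN_CACHE (the ordering shown is just one sensible choice):
-- Total elapsed time (preparation + execution) per statement, most expensive first
SELECT STATEMENT_HASH,
       TOTAL_PREPARATION_TIME + TOTAL_EXECUTION_TIME AS TOTAL_ELAPSED_TIME
FROM M_SQL_PLAN_CACHE
ORDER BY TOTAL_ELAPSED_TIME DESC;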
7. How can I determine the most critical SQL statements?
For a purely technical SQL optimization it is useful to determine the most expensive SQL statements in terms of elapsed time and check the result list top down. The most expensive SQL statements in the SQL cache can be determined in the following ways:
- M_SQL_PLAN_CACHE / HOST_SQL_PLAN_CACHE
- SAP HANA Studio -> Administration -> Performance -> SQL Plan Cache
- ABAP: DBACOCKPIT -> Performance -> SQL Plan Cache
- SQL: "HANA_SQL_SQLCache" (SAP Note 1969700)
- Solution Manager self service "SQL Statement Tuning" (SAP Note 1601951)
Alternatively, the following traces can be used to identify critical SQL statements:
Trace | Environment | SAP Note | Detail |
ST05 | ABAP | | SQL trace |
ST12 | ABAP | | Single transaction analysis |
SQLM | ABAP | 1885926 | SQL monitor |
SQL trace | SAP HANA Studio, SQL, ABAP (DBACOCKPIT) | 2031647 | SQL trace |
Expensive statement trace | SAP HANA Studio, SQL, ABAP (DBACOCKPIT) | 2180165 | Expensive statement trace |
The currently running SQL statements can be determined via SQL: "HANA_SQL_ActiveStatements" (SAP Note 1969700).
In all cases you get a list of SQL statements / STATEMENT_HASH values that can subsequently be analyzed in more detail.
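For example, the expensive statement trace (SAP Note 2180165) can be activated with a duration threshold; a minimal sketch, assuming the standard parameter names and a threshold of 1 second (threshold_duration is specified in microseconds):
-- Activate the expensive statement trace for statements running longer than 1 s
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('expensive_statement', 'enable') = 'true',
      ('expensive_statement', 'threshold_duration') = '1000000' WITH RECONFIGURE;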
8. How can I determine and interpret basic performance information for a particular SQL statement?
The following SQL statements available via SAP Note 1969700 can be used to collect further details for a given STATEMENT_HASH (to be specified in the "Modification section" of the statements):
SQL command | Details |
SQL: "HANA_SQL_StatementHash_BindValues" | Captured bind values from SQL cache in case of long running prepared SQL statements (SPS 08 and higher) |
SQL: "HANA_SQL_StatementHash_KeyFigures" | Important key figures from SQL cache, for examples see below |
SQL: "HANA_SQL_StatementHash_SQLCache" | SQL cache information (current and / or historic) like executions, records, execution time or preparation time |
SQL: "HANA_SQL_StatementHash_SQLText" | SQL statement text |
SQL: "HANA_SQL_ExpensiveStatements" | Important key figures from expensive statement trace (SAP Note 2180165) |
SQL: "HANA_SQL_ExpensiveStatements_BindValues" | Captured bind values from expensive statement trace (SAP Note 2180165) |
Scenario 1: Transactional lock waits
We can see that nearly the whole execution time is caused by lock wait time, so transactional locks (i.e. record or object locks) are responsible for the long runtime. Further transactional lock analysis can now be performed based on SAP Note 1999998.
Scenario 2: High number of executions
An elapsed time of 0.55 ms for 1 record is not particularly bad, and for most SQL statements no further action would be required. In this particular case, however, the number of executions is very high, so that overall the execution time of this SQL statement is significant. Ideally the number of executions can be reduced from an application perspective. If this is not possible, further technical analysis should be performed. Really quick single row accesses can take less than 0.10 ms, so there might be options for further performance improvements (e.g. index design or table store changes).
Scenario 3: High elapsed time
An execution time of 284 ms for retrieving one row from a single table is definitely longer than expected and it is very likely that an improvement can be achieved. Further analysis is required to understand the root cause of the increased runtime.
Scenario 4: High elapsed time for DML operations, no lock wait time
An elapsed time of 10 ms for inserting a single record is quite high. If DML operations have an increased elapsed time it can be caused by internal locks that are e.g. linked to the blocking phase of savepoints. Further internal lock analysis can now be performed based on SAP Note 1999998.
Scenario 5: Many records
Reading about 200,000 records in less than 2 seconds is not a bad value. In the first place you should check from an application perspective if it is possible to reduce the result set or the number of executions. Apart from this there are typically also technical optimizations available to further reduce the elapsed time (e.g. delta storage or table store optimizations).
Scenario 6: High Preparation Time
Preparation time is linked to the initial parsing and compilation. You can reduce this overhead by using bind variables rather than literals so that several SQL statements with the same structure need to be parsed only once. Furthermore you can perform a more detailed analysis to understand why the preparation step is taking longer than expected.
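For illustration (table and column names are placeholders, not taken from this note), compare a literal-based statement, which is prepared once per distinct value, with a bind-variable version that is prepared only once and then reused:
-- Literal version: every distinct value creates a separate SQL cache entry
SELECT * FROM "MY_TABLE" WHERE "MY_COLUMN" = '4711';
-- Bind variable version: prepared once, reused for all values
SELECT * FROM "MY_TABLE" WHERE "MY_COLUMN" = ?;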
9. Which options exist to understand the execution of a SQL statement in detail?
In order to understand how a SQL statement can be tuned it is essential to know how it is processed. For this purpose you can use the following tools:
Tool | Details |
Explain | High-level information for SQL execution (e.g. joins, used indexes) |
PlanViz | Detailed graphical insight into the SQL execution |
Thread sample analysis | High-level thread state and lock type information (e.g. useful in case of waits for internal locks which are not reflected in the "lock wait time") |
Performance trace | Detailed insight into the SQL execution including SQL plan and function profiling. See SAP Note 1787489 for more details. |
User-specific trace | Granular trace information for configurable components of the SQL statement execution |
10. What are typical approaches to tune expensive SQL statements?
Depending on the results of the analysis of a SQL statement, the following optimizations can be considered:
Possible symptoms | Optimization | Optimization Details |
High number of executions | Reduce high number of executions | Check from application perspective if the number of executions can be reduced, e.g. by avoiding identical SELECTs or adjusting the application logic. |
High number of selected records | Reduce number of selected records | Check from application perspective if you can restrict the number of selected records by adding further selection conditions or modifying the application logic. Check if you can reduce the amount of relevant data by archiving or cleanup of basis tables (SAP Note 706478). |
High lock wait time due to record locks | Reduce record lock contention | Check from application perspective if you can reduce concurrent changes of the same records. Check from application perspective if you can reduce the critical time frame between a change operation and the next COMMIT. See SAP Note 1999998 for more information. |
High lock wait time due to object locks | Reduce object lock contention | Check if you can schedule critical offline DDL operations less often or at times of lower workload. Check if you can use online instead of offline DDL operations. See SAP Note 1999998 for more information. |
High total execution time, significant amount of thread samples pointing to internal lock waits | Reduce internal lock contention | Check if you can reduce the internal lock wait time (SAP Note 1999998). |
Execution time higher than expected, optimal index doesn't exist; table and column scan related thread methods and details like searchDocumentsIterateDocidsParallel, IndirectScanBvOutJob<range>, IndirectScanVecOutJob<range>, sparseSearch, SparseScanRangeVecJob | Synchronize index and application design | Check from an application perspective if you can adjust the database request so that existing indexes can be used efficiently. Create a new index or adjust an existing index so that the SQL statement can be processed efficiently (see SAP Note 2160391 for index recommendations). |
Execution time higher than expected, negative impact by existing partitioning | Synchronize partitioning and application design | Check if the partitioning layout and the application access patterns can be aligned, e.g. so that partition pruning based on the selection conditions can take effect. |
Long runtime with OR condition having selection conditions on both tables | Avoid OR in this context | If an OR concatenation is used and the terms reference columns of more than one table, a significant overhead can be caused by the fact that a cartesian product of both individual result sets is required in some cases. If you face performance issues in this context, you can check if the problem disappears with a simplified SQL statement accessing only one table. If yes, you should check if you can avoid joins with OR concatenated selection conditions on both tables. |
Execution time higher than expected, significant portion for accessing delta storage | Optimize delta storage | Make sure that the auto merge mechanism is properly configured. See SAP Note 2057046 for more details. Consider smart merges controlled by the application scenario to make sure that the delta storage is minimized before critical processing starts. |
Execution time slightly higher than expected | Change to row store | In general the number of tables in the row store should be kept at a small level, but in certain cases it is an option to check if the performance can be optimized by moving a table to the row store. |
Execution time sporadically increased | Avoid resource bottlenecks | Check if peaks correlate to resource bottlenecks (CPU, memory, paging) and eliminate bottleneck situations. |
Execution time higher than expected, significant portion for sorting (trex_qo trace: doSort) | Optimize sorting | Sort operations (e.g. related to ORDER BY) are particularly expensive if several unfavorable conditions come together, e.g. a large number of records that has to be sorted. Check if the amount of sorted data can be reduced. |
Long BW query runtime; long execution time of TREXviaDBSL calls; long execution time of TREXviaDBSLWithParameter calls | Optimize BW queries | If you don't know the actual query yet, get in touch with the application team to understand which application operations result in long running TREXviaDBSL calls and start the analysis from that perspective. When you are able to reproduce the issue, more detailed traces can be activated as required. BW allows executing queries in different execution modes which control the utilization of SAP HANA internal engines. Furthermore it is possible and usually recommended to convert infocubes to the HANA-optimized type. See BW on HANA and the Query Execution Mode and SAP Notes 1682992 and 1734002 for further information and check if modifying the execution mode and / or the infocube type improves the performance of BW queries. See SAP Note 2016832 that describes how to adjust the query execution mode. |
Long BW DTP and transformation runtime using SAP HANA execution mode | Optimize BW DTP and transformation setup | See SAP Notes 2033679 (BW 7.40 SP 05 - 07) and 2067912 (BW 7.40 SP 08 - 10) in order to make sure that all recommended fixes are implemented. For example, SAP Note 2133987 provides a coding correction for SAPKW74010 in order to speed up delta extractions from a DSO. See SAP Note 2057542 and consider SAP HANA based transformations available as of BW 7.40 SP 05. |
Long runtime of FOR ALL ENTRIES query | Adjust FOR ALL ENTRIES transformation | If a FOR ALL ENTRIES selection in SAP ABAP environments takes long and consumes a lot of resources, you can consider adjusting the way the database requests are generated (see SAP Note 1987132 for the related DBI parameters). |
Long runtime of query on SAP HANA DDIC objects | Assign CATALOG READ | If the CATALOG READ privilege is not assigned to a user, queries on SAP HANA DDIC objects like TABLES, INDEXES or TABLE_COLUMNS can take much longer, because SAP HANA needs to filter the relevant (own) data and suppress the display of information from other schemas. Make sure that CATALOG READ is assigned (either directly or indirectly via roles) to users having to access SAP HANA DDIC objects. You can use SQL: "HANA_Security_GrantedRolesAndPrivileges" (SAP Note 1969700) to check if this privilege is already assigned. |
Long runtime of queries on monitoring views | Use fewer and larger selections; select information from other sources | For technical reasons accesses to monitoring views like M_TABLE_LOCATIONS or M_TABLE_PERSISTENCE_LOCATIONS often scan the complete underlying structures regardless of the WHERE clause. Thus, you should avoid frequent selections of small amounts of data (e.g. one access per table) and use fewer selections reading larger amounts of data (e.g. for all tables of a schema at once). Alternatively check if you can select the required information from other sources, e.g. monitoring view M_CS_TABLES. Similar overhead is also required for other monitoring views. The column FILTER_PUSHDOWN_TYPE in the internal view SYS.M_MONITOR_COLUMNS_ provides information to what extent a column supports the pushdown of filters. |
Wrong join order | Update join statistics | An obviously wrong join order can be caused by problems with join statistics. Join statistics are created on the fly when two columns are joined for the first time. Up to SPS 08 the initially created join statistics are kept until SAP HANA is restarted. This can cause trouble if join statistics were created at a time when the involved tables had a different filling level, e.g. when they were empty. In this case you can restart the indexserver in order to make sure that new join statistics are created. Starting with SPS 09 SAP HANA automatically invalidates join statistics (and SQL plans) when the size of an involved table has changed significantly. If you suspect problems with join statistics, you can create a join_eval trace (SAP Note 2119087) and check for lines like: JoinEvaluator.cpp(01987) : getJoinStatistics: SAPSR3:CRMM_BUAG (-1)/BUAG_GUID<->SAPP25:CRMM_BUAG_H (-1)/BUAG_GUID JoinEvaluator.cpp(01999) : jStats clSizesL:0 clCntAL:0 cntSJL:0 TTL:0 clSizesR:0 clCntAR:0 cntSJR:0 TTR:0. Zero values in the second line can indicate that join statistics were created at a time when one table was empty. Join statistics are an SAP HANA internal concept, so they can't be displayed or adjusted. |
High runtime, not reproducible in SAP HANA Studio / DBACOCKPIT | Recompile execution plan | If the runtime of a query is longer than expected and you can't reproduce the long runtime with SAP HANA Studio or DBACOCKPIT (if bind variables are used: using a prepared statement with proper bind values), the issue can be caused by an inadequate execution plan (e.g. generated based on the bind values of the first execution or based on statistical information collected during the first execution). In this case you can check if an invalidation of the related SQL cache entry resolves the issue: ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY '<plan_id>'. You can identify the plan ID related to a statement hash by executing SQL: "HANA_SQL_SQLCache" (STATEMENT_HASH = '<statement_hash>', AGGREGATE_BY = 'NONE', DATA_SOURCE = 'CURRENT') available via SAP Note 1969700. Depending on the factors considered during the next parsing (e.g. set of bind values, dynamic statistics information) a better execution plan may be generated. It can happen that you have to repeat the RECOMPILE command until a good set of bind values is parsed. Even if the problem is resolved after the RECOMPILE, there is no guarantee that it is permanently fixed, because after every eviction or restart a new parsing happens from scratch. If the problem is supposed to be linked to bind values, you can consider adjusting the application so that different classes of bind values are executed with slightly different SQL statements, so that SAP HANA can parse each statement individually. As of SPS 09 more frequent reparses happen, so the situation can improve. As of SPS 09 you can also consider the IGNORE_PLAN_CACHE hint as a last resort (see SAP Note 2142945). Be aware that this will make the performance more predictable, but due to the permanent parsing requirements the quick executions can slow down significantly, so it should only be used in exceptional cases. |
Long preparation times | Optimize parsing | See SAP Note 2124112 and check if the amount and duration of preparations can be reduced (e.g. by increasing the SQL cache or using bind variables). |
High runtime with range condition on multi column index | Avoid selective ranges on multi column indexes | Range conditions like BETWEEN, "<", ">", ">=", "<=" or LIKE can't be used to restrict the search in a multi column index. As a consequence the performance can be significantly worse than expected. Possible solutions are an adjusted index design (e.g. single column indexes on the selective columns) or adjusted selection conditions (e.g. equality conditions instead of ranges). |
High runtime with multiple OR concatenated ranges on indexed column | Decompress column | Due to a design limitation with Rev. <= 100 SAP HANA doesn't use an index on an advanced-compressed column if multiple OR concatenated range conditions exist on the same column. As a consequence statements like SELECT ... FROM COSP WHERE ... AND ( OBJNR BETWEEN ? AND ? OR OBJNR BETWEEN ? AND ? ) can have a long runtime and high resource consumption. As a workaround you can restrict the table to DEFAULT compression. See SAP Note 2112604 for more information. |
High runtime with multiple EXISTS in combination with OR | Consider UNION or upgrade to Rev. >= 111 | Up to SAP HANA Rev. 110 multiple EXISTS semi-joins are not evaluated optimally if combined with OR. As a workaround you can check if OR can be transformed into UNION. As a permanent solution you can use SAP HANA Rev. >= 111. |
High runtime although good single column index exists | Make sure that the index column isn't compressed with SPARSE or PREFIXED | If a column is compressed with type SPARSE or PREFIXED, an index on this column doesn't take effect. In order to use a different compression for the column, you can run the following command: UPDATE "<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE', 'COLUMNS' = ('<column_name>')). See SAP Note 2112604 for more information related to SAP HANA compression. |
High runtime with join and TOP <n> | Avoid TOP selections on unselective joins | SAP HANA performs a significant amount of join activities on the overall data set before finally returning the TOP <n> records. Therefore you should check from an application perspective if a TOP <n> selection on an unselective join can be avoided or if the join can be made more selective. |
High runtime of MIN and MAX searches | Avoid frequent MIN and MAX searches on large data volumes | Indexes in SAP HANA can't be used to identify the maximum or minimum value of a column directly. Instead the whole column has to be scanned. Therefore you should avoid frequent MAX or MIN searches on large data volumes. A possible alternative is e.g. maintaining the current minimum or maximum value separately from an application perspective. |
High runtime when LIKE condition with leading place holder is evaluated | Avoid LIKE conditions with leading place holder | The evaluation of a LIKE condition with a leading place holder (e.g. '%1234') can consume significant time on large tables. The runtime can be high even if the result set was already reduced to a small amount of records (> 0) before evaluating the LIKE condition. In certain cases you can use LIKE_REGEXPR (with the appropriate search pattern) instead of LIKE as a workaround. This should be tested thoroughly because LIKE_REGEXPR can also impose overhead in other scenarios. |
High runtime of COUNT DISTINCT | Use SPS 09 or OLAP engine | If SAP HANA <= SPS 08 is used and a COUNT DISTINCT is executed on a column with a high amount of distinct values, a rather large internal data structure is created regardless of the actual number of records that have to be processed. This can significantly increase the processing time for COUNT DISTINCT. As a workaround you can check if the SQL statement is processed more efficiently using the OLAP engine by using the USE_OLAP_PLAN hint (see SAP Note 2142945). As a permanent solution you have to upgrade to SPS 09 or higher, so that the size of the internal data structure takes the amount of processed records into account. |
High mass UPDATE runtime | Use primary key for updating | Updating a high amount of records in a single command (e.g. "UPDATE ... FROM TABLE" in ABAP systems) is more efficient than performing individual UPDATEs for each record, but it still can consume significant time. A special UPDATE performance optimization is available when the UPDATE is based on a primary key. So if you suffer from long mass UPDATE runtimes you can check if you can implement an appropriate primary key. A unique index is not sufficient for this purpose; a real primary key constraint is needed. |
High runtime of TOP 1 requests | Check TOP 1 optimization | If on SAP HANA SPS >= 09 a query with a TOP 1 restriction (e.g. "SELECT TOP 1 ...") runs much longer than expected and call stacks (e.g. via SQL: "HANA_Threads_Callstacks", SAP Note 1969700) indicate that most of the time is spent in UnifiedTable::MVCCObject::generateOLAPBitmapMVCC, you can check if disabling the TOP 1 optimization feature can be used as a workaround (see SAP Note 2238679): indexserver.ini -> [search] -> qo_top_1_optimization = false |
High runtime of certain queries with Rev. 90 to 97.01 in UnifiedTable::MVCCObject coding | Upgrade to Rev. >= 97.02 | Similar to the TOP 1 issue above, other queries can also suffer from long runtimes in modules like UnifiedTable::MVCCObject::isTSBlockGarbageCollected or UnifiedTable::MVCCObject::generateOLAPBitmapMVCC. One main root cause is fixed as of Rev. 97.02, so an upgrade can be considered in case the performance seriously impacts production operation. |
Sporadically increased runtimes of calculation scenario accesses | Check calculation engine cache size | If accesses to calculation scenarios are sometimes slow, cache displacements can be responsible. You can use SQL: "HANA_Configuration_MiniChecks" (SAP Note 1999993, check ID 460) and SQL: "HANA_CalculationEngine_CalculationScenarios" (SAP Note 1969700) to find out more. If the cache is undersized, you can increase it using the following parameter (default: 1048576): indexserver.ini -> [calcengine] -> max_cache_size_kb = <size_kb>. See also SAP Note 1988050 for specific problems with SAP HANA <= Rev. 73. |
Increased runtime of joins | Check table locations | Make sure that tables involved in critical joins are located on the same host whenever possible, so that unnecessary communication overhead between SAP HANA nodes can be avoided. |
High runtime accessing row store tables | Check garbage collection | A high runtime on row store tables can be caused by a high number of versions, e.g. due to a blocked garbage collection. See SAP Note 2169283 for more information about SAP HANA garbage collection. |
Bad performance on specific row store table, unexpected UNION ALL in execution plan | Check table containers | For some reasons (e.g. when a column is added) a row store table can consist of more than one underlying container. As a consequence, SAP HANA needs to combine the results from several containers using UNION ALL. Existing indexes may only work for a subset of the containers, so they aren't used in the most efficient way. In order to check the number of containers and generate cleanup commands, you can use SQL: "HANA_Tables_RowStore_TablesWithMultipleContainers" (SAP Note 1969700). A table can be merged into a single container by reorganizing it with the following command: ALTER TABLE "<table_name>" RECLAIM DATA SPACE. Be aware that this activity requires a table lock, so concurrent accesses to the same table may be blocked. Therefore it is recommended to perform it during a time of reduced workload. Furthermore you can set a low lock wait timeout on transaction level (e.g. 10000 ms) in order to reduce the risk of long-term lock escalations: SET TRANSACTION LOCK WAIT TIMEOUT <timeout_in_ms> |
High runtime of COMMIT operations | Check disk I/O and system replication | If COMMITs take on average more than a few ms, this is an indication of a bottleneck. See SAP Note 2000000 ("How can the performance of COMMIT operations be optimized?") for more information. |
Unexplainable long runtime | Optimize internal processing | If the runtime of a SQL statement is much higher than expected and none of the above scenarios apply, a few more general options are left: See SAP Note 2142945 and test if the performance improves with certain hints (e.g. USE_OLAP_PLAN, NO_USE_OLAP_PLAN), because this can provide more ideas for workarounds and underlying root causes. Check if SAP Notes exist that describe a performance bug for similar scenarios. Check if the problem remains after implementing a recent SAP HANA patch level. Open a SAP incident on component HAN-DB in order to get assistance from SAP support. |
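As an illustration of the "Recompile execution plan" approach from the table above (the hash and plan ID values are placeholders):
-- Determine the plan ID belonging to a statement hash
SELECT PLAN_ID FROM M_SQL_PLAN_CACHE WHERE STATEMENT_HASH = '<statement_hash>';
-- Invalidate the cached plan so that it is parsed again at the next execution
ALTER SYSTEM RECOMPILE SQL PLAN CACHE ENTRY '<plan_id>';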
11. Are secondary indexes required to provide an optimal performance?
SAP HANA is able to process data efficiently, so that a good performance is often possible even without indexes. In case of frequent, selective accesses to large tables it is nevertheless useful to create additional secondary indexes. As a rule of thumb, column store indexes should be created on single columns whenever possible, because single column indexes require much less memory than multi column indexes. See SAP Note 2160391 for more details.
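A minimal sketch of a single-column index creation (all names are placeholders):
-- Single column indexes are usually preferable in the column store because
-- they require much less memory than multi column indexes
CREATE INDEX "<index_name>" ON "<table_name>" ("<column_name>");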
12. Which advanced features exist for tuning SQL statements?
The following advanced features exist to optimize SQL statements:
Feature | SAP Note | Details |
Hints | 2142945 | If the long runtime is caused by inadequate execution plans, you can influence them by using SAP HANA hints. |
Result cache | 2014148 | If complex queries on rather static tables have to be executed repeatedly for the same result, you can consider using the query result cache. |
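As an example for hints, a statement-level hint can be attached via the WITH HINT clause; a sketch using the USE_OLAP_PLAN hint from SAP Note 2142945 (table and column names are placeholders):
-- Request processing via the OLAP engine for this statement only
SELECT COUNT(DISTINCT "<column_name>") FROM "<table_name>" WITH HINT (USE_OLAP_PLAN);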
13. Are there standard recommendations for specific SQL statements available?
In general it is hard to provide standard recommendations because optimizations often depend on the situation in the individual system. Nevertheless there are some exceptions where the underlying root cause is of a general nature. These SQL statements are collected in the following table:
Statement hash | Type | Object | Optimization |
a35caae16f2fa43033c8c4f278483d0c e5332b10f3a1a4215728857efc0f8eda f9468e4f53d23d0dd90230463b544c3c 48482a15fb005f2ef288ef198c0b1275 |
CALL |
ALERT_BACKUP_LONG_LOG_BACKUP |
These calls are issued by the SAP HANA statistics server. Usually a long runtime is caused by a large backup catalog. See SAP Note 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?") and check if the backup catalog can be reduced. Alternatively you can increase the execution interval. |
c0f43e5dbdfc438b86964acb0a22c05f various |
DELETE CALL |
HELPER_ALERT_MON_PYTHONTRACE_ACTIVE_AGE ALERT_MON_PYTHONTRACE_ACTIVE |
This statistics server check accesses the file-based M_INIFILE_CONTENTS view among others. As a consequence its runtime can suffer from disk I/O problems. Check if there are times of significantly increased execution times. If yes, analyze the disk I/O performance at this time (SAP Note 1999930). In case of correlated I/O problems it is likely that they are responsible for the long runtime of ALERT_MON_PYTHONTRACE_ACTIVE. |
eb82038136e28e802bd6913b38a7848c |
CALL |
BW_CONVERT_CLASSIC_TO_IMO_CUBE |
This procedure is executed when a classic infocube is converted to an in-memory optimized infocube using transaction RSMIGRHANADB. Increased load is normal when large infocubes are converted. After the conversion of the existing infocubes is finished, executing this procedure is no longer required, so it is only a temporary activity. |
651d11661829c37bf3aa668f83da0950 42bf4a47fbf7eabb5e5b887dffd53c7c |
CALL INSERT |
COLLECTOR_HOST_CS_UNLOADS HOST_CS_UNLOADS_BASE |
Disable the data collection for HOST_CS_UNLOADS as described in SAP Note 2084747. This action is also recommended in SAP Note 2147247. |
ac52398f58a752bed7843aac8d829464 e9f2feac54a048984928f3fcbcfacedd |
CALL INSERT |
COLLECTOR_HOST_RS_INDEXES HOST_RS_INDEXES_BASE |
These statements are related to the row store index history collection performed by the statistics server (see SAP Note 2147247). Consider reducing the amount of data in large row store tables (e.g. based on SAP Note 706478) or moving tables with large indexes to column store in order to improve the runtime of these queries. |
0c2d40de2702bc576de87a9aec2cef30 f45e2eaba236b6655804f6585d631923 |
CALL INSERT |
COLLECTOR_GLOBAL_ROWSTORE_TABLES_SIZE GLOBAL_ROWSTORE_TABLES_SIZE_BASE |
These statements are related to the row store table size history collection performed by the statistics server (see SAP Note 2147247). Consider reducing the amount of data in large row store tables (e.g. based on SAP Note 706478) or moving large tables to column store in order to improve the runtime of these queries. |
various | SELECT | BUT000, BUT020, ADRC |
Expensive SELECTs on tables BUT000, BUT020 and ADRC from CL_BUPA_IL_SEARCH_SERVICE->SEARCH_BY_ADDRESS2 can be optimized via SAP Note 1792008 by using switch CRM_PERF_BP_SEARCH. |
68f35c58ff746e0fe131a22792ccc1b5 | CALL | BW_F_FACT_TABLE_COMPRESSION |
It is normal to see a significant cumulated runtime, because it is a central procedure call for all F fact table compressions. This procedure performs BW compression on F fact tables (i.e. elimination of requests and transition of data into E fact table or dedicated F fact table partition). If F fact tables with a significant amount of records are processed, a significant runtime is expected. Otherwise you can use SQL: "HANA_Threads_ThreadSamples_FilterAndAggregation" (SAP Note 1969700) in order to check for the THREAD_DETAIL information related to this statement hash, which contains the involved table names. Based on this information you can check if you can optimize BW F fact table compression for the top tables. Be aware that the THREAD_DETAIL information is only available when the service_thread_sampling_monitor_thread_detail_enabled parameter is set to true (see SAP Note 2114710). |
783ed7080a18c2f0684a1765c95354e6 |
SELECT | CDHDR, CDPOS, CDPOS_UID, CDPOS_STR |
This statement includes three OR concatenated EXISTS subqueries that can't be handled optimally with SAP HANA Rev. <= 110. See "What are typical approaches to tune expensive SQL statements" -> "High runtime with multiple EXISTS in combination with OR" for details and consider an upgrade to SAP HANA Rev. >= 111 in order to eliminate the underlying limitation. |
a7b7221b320c2cf6d25fd5842bf89ec4 e6c9e30929d7a58fd740be11b9d63204 611684d486918a356f1bbb14b790c17a |
SELECT | CDPOS |
Implement the dbsl_equi_join hint as described in SAP Note 2162575, so that the SQL statement is processed via IN list rather than OR concatenation. Alternatively avoid blanks at the end of TABKEY, so that the OR concatenation of LIKE conditions on TABKEY is no longer generated. |
26e1ca7b731a467f2db818753d80118f |
SELECT | CRMD_ORDER_INDEX |
This access suffers from the COUNT DISTINCT limitation of SAP HANA SPS <= 08 and can be improved with the USE_OLAP_PLAN hint. SAP Note 2255511 provides the coding correction. |
0e483b3074906106fd7c321a30fdea85 |
SELECT | CRMORDERLOPR |
This SELECT is triggered by SAP ABAP table buffer reloads that can happen frequently because of invalidations. See SAP Note 1916476 for more information and disable table buffering for CRMORDERLOPR. |
afb54cf91cd5d51ac7f360f82e67d5dd |
INSERT | DDLOG |
INSERTs into DDLOG can be impacted by the underlying SAP HANA sequence DDLOG_SEQ (see e.g. SAP Note 1977214). It is generally recommended to activate caching for this sequence. You can use the following command to activate a cache with 1000 elements for the sequence:
ALTER SEQUENCE <schema_owner>.DDLOG_SEQ CACHE 1000 |
91d7aff2b6fb6c513293d9359bf559a6 |
CALL | DSO_ACTIVATE_PERSISTED |
It is normal to see a significant cumulated runtime, because it is a central procedure call for all DSO activations. This procedure is the SAP HANA server side implementation of DSO activation in BW environments. During this activation data is moved between the three involved tables (/BIC/A*40, /BIC/A*00 and /BIC/B000*). It is much more efficient than the previous, SAP application server based approach, because moving a high amount of data from the database to the application and back is no longer required. If the DSO activation in distributed environments takes longer than expected, you should check if the partitioning of the /BIC/A*40 and /BIC/A*00 tables is consistent, i.e. same partitioning and same hosts for the related partitions. You can check this DSO consistency via SQL: "HANA_BW_InconsistentDSOTables" (SAP Note 1969700). For SAP HANA optimized DSOs you can check SAP Note 1646723 for DSO activation parameters. A DSO activation performs many changes, so it can suffer significantly from bottlenecks in the redo log and system replication area. See SAP Notes 1999930 for redo log I/O analysis and 1999880 ("Can problems with system replication impact the performance on the primary system?") for more details. Apart from this you can also apply optimizations on BW side, e.g. smaller requests or smaller keys. |
various |
SELECT | EABL, EABLG |
If joins on tables EABL and EABLG are expensive without apparent reasons (like inadequate index design) you can check for the following optimizations:
|
554c18cc7e6018abd21c69429b273f0b |
SELECT FOR UPDATE | FKKDIHDTMP |
If SELECT FOR UPDATE requests on table FKKDIHDTMP suffer from exclusive lock waits (SAP Note 1999998) you should check if a reduction of mass activity intervals can improve the situation (SAP Note 2177798). |
7c541b915a48ae457956cda4e9702780 83ac4bf74da990133f1c525d05f43714 db2a5d8b668a837677bb6946de2a8d76 |
INSERT CALL |
HOST_BLOCKED_TRANSACTIONS_BASE COLLECTOR_HOST_BLOCKED_TRANSACTIONS |
See SAP Note 2147247 and consider reducing the execution frequency for the statistics server actions related to blocked transactions (Collector_Host_Blocked_Transactions, Alert_Blocked_Transactions), e.g. to 300 seconds:
UPDATE _SYS_STATISTICS.STATISTICS_SCHEDULE SET INTERVALLENGTH = 300 WHERE ID IN (49, 5020) |
b8b6f286b1ed1ef2e003a26c3e8e3c73 cf2e9b374514550f2b2e522df9f619ec 2afa9311f17e325d6d1418b3dd3eb388 2c7032c1db3d02465b5b025642d609e0 5ec9ba31bee68e09adb2eca981c03d43 5f42d3d4c911e0b34a7c60e3de8a70d2 |
SELECT |
M_BACKUP_CATALOG_FILES M_BACKUP_CATALOG |
These SELECTs are regularly issued by the backup console in SAP HANA Studio. In order to minimize the impact on the system you should open the backup console only when required and not permanently. If you repeatedly need to open the backup console, you can increase the refresh interval (default: 3 seconds). Check if the backup catalog is unnecessarily large and delete old catalog entries if possible using:
BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID ... |
bb45c65f7cc79e50dbdf6bbb3c1bf67e b0f95b51bcece71350ba40ce18832179 |
SELECT | M_CONNECTIONS |
This SELECT is executed when a SAP ABAP work process connects to SAP HANA. Implement a sufficiently new DBSL patch level according to SAP Note 2207349 in order to replace this query with a more efficient approach based on session variables. |
fcdf8f8383886aaf02c3a45e27fbd5f2 | SELECT | M_CONNECTIONS M_ACTIVE_STATEMENTS |
This SELECT is executed if performance monitoring is scheduled via transaction /SDF/MON and the checkbox "SQL statements" is flagged. Disable this checkbox if you want to eliminate the load introduced by this statement hash. |
various, e.g.: d952da8472c0d4350559843cbfd48d0e |
SELECT | M_SERVICE_REPLICATION |
SELECTs on M_SERVICE_REPLICATION read some data from the secondary system replication site. They can suffer from network problems or unavailability of the secondary system. See SAP Note 1999880 and make sure that system replication is established properly. Check according to SAP Note 2222200 if the network configuration of the primary and secondary SAP HANA system replication sites works properly. As a workaround in case of an unavailable secondary system you can temporarily disable the regular secondary system check (SAP HANA Rev. >= 102.03 and >= 111):
global.ini -> [system_replication] -> check_secondary_active_status = 'false' |
09214fa9f6a8b5322d329b156c86313b 28996bd0243d3b4649fcb713f43a45d7 7439e3c53e84b9f849c61d672da8cf79 84e1f3afbcb91b8e02b80546ee57c537 9ce26ae22be3174ceaffcfeee3fdd9b7 f32de4a673809ad96393ac60dfc41347 |
SELECT UPDATE |
M_STATISTICS_LASTVALUES STATISTICS_LASTVALUES |
This SELECT is executed by the Solution Manager in order to read history data from the statistics server. It is particularly critical if the standalone statistics server is used. The UPDATE is executed by the standalone statistics server in order to update statistical information. Implement SAP Note 2147247 ("How can the memory requirements of the statistics server be minimized?"), so that the collection and processing of statistical data is optimized. A switch to the embedded statistics server is typically already sufficient to resolve this issue. |
0ad1a4c1c1a5844d01595f5b3cdc2977 | SELECT | M_TRACEFILE_CONTENTS | This SELECT is regularly issued by the backup console in SAP HANA Studio. In order to minimize the impact on the system you should open the backup console only when required and not permanently. |
6b1d10732401fe82992d93fa91f4ae86 | SELECT FOR UPDATE | NRIV |
Check if you can reduce the critical time frame between SELECT FOR UPDATE on NRIV and the next COMMIT. Identify the involved number range objects from application perspective and optimize their number range buffering.
Make sure that the table NRIV is located in row store, because the frequent UPDATE operations can be handled more efficiently in row store compared to column store.
Be aware that bad performance for the SELECT FOR UPDATE on NRIV can also be a consequence of some underlying general issues, so if it happens occasionally and in combination with some other issues, it can be a symptom for another problem and not an issue itself.
|
32139366d69f10543bc7abfc605b37a2 | DELETE | /PLMB/FRW_BUFFER |
This DELETE can suffer from exclusive record lock waits due to an inappropriate application COMMIT strategy. See SAP Note 2236233 for a coding correction.
|
a77c6081dd733bd4641d4f42205f6c84 | SELECT FOR UPDATE | QIWKTAB |
This SELECT FOR UPDATE can suffer from row store garbage collection activities if the QRFC scheduler issues a very high amount of updates on a few records of this table. This typically happens if there are STOP entries in transaction SMQ2 that can never be processed, but result in a looping scheduler check. Go to transaction SMQ2 and check if there are STOP entries in any client which can be removed. After cleanup the update frequency should reduce significantly and the row store garbage collection is able to keep up without problems. See SAP Note 2169283 for more information related to garbage collection. SAP Note 2125972 provides an application correction to reduce the amount of updates on QIWKTAB. |
efd74759c6dc35f70ccd3529b62588f3 | SELECT | RSBERRORLOG |
The runtime is high because of the ORDER BY clause that can't be supported by the primary index with SAP HANA. See SAP Note 2117500 and implement the coding correction or upgrade to a newer SAP BW support package level to eliminate this ORDER BY.
|
1d2893314f809f3e491e5628f6683ea5 3ff7bab8b9778b77b821816c3a147f81 6662b470dc055adfdd4c70952de9d239 93db9a151f16022e99d6c1c7694a83b0 c706c6fb087b78f6d07b5ae0848179a3 ccbffaf1607aab010cbd4512626e10fd d740f122ae4c4ae3664a09482c249947 |
SELECT | RSDDSTATEVDATA |
Consider reducing the execution frequency of the related Solution Manager extraction “WORKLOAD ANALYSIS (BI DATA)”.
|
07a4169722d9848d8150f53037f3a6fe | UPDATE | RSDD_TMPNM_ADM |
Table RSDD_TMPNM_ADM contains counters for temporary BW table names. Only a few records are updated frequently. In row store this can result in garbage collection related performance issues (see SAP Note 2169283). Therefore it is recommended (and also default with newer SAP ABAP releases) that RSDD_TMPNM_ADM is located in column store. You can move the table to column store using the following command:
ALTER TABLE RSDD_TMPNM_ADM COLUMN |
0e30f2cd9cda54594a8c72afbb69d8fd a115fd6e8a4da78f0813cfe35ffc6d42 00218f32b03b44529da647b530bf4e6d 8a8ec2baa7873e835da66583aa0caba2 b51046ad3685e1d45034c4ba78293fd8 ec91f8ecc5030996f1e15374f16749a8 f0a4785a6ada81a04e600dc2b06f0e49 52063e6acb3492518389d230490492f7 a3df1aa41b220869d74683a8fe9b1f98 c3a9a6620824cd0c931ae3a0e88942c6 8d87d6a303389d8a3f2c72639910da49 4fe60bbfbfa2979b89e8da75f5b2aac7 ca8758b296bd2321306f4fee7335cce5 2aeb4f7ffd47da8917a03f15a57f411a |
SELECT SELECT SELECT SELECT SELECT SELECT SELECT |
RSICCONT RSSELDONE RSMONICDP RSSTATMANPART RSSTATMANREQMAP RSSTATMANSTATUS TESTDATRNRPART0 |
If these queries return a significant amount of records (> 100), this might be caused by a high number of existing requests. In this case you can check the following:
|
8437943036a2a9fd82b290b2c6eafce7 | SELECT | RSR_CACHE_FFB RSR_CACHE_VARSH |
See SAP Note 706478 and check if you can clean up old entries in RSR_CACHE tables. |
753769700d4435edcbc8cd8ecaa2a6fc | CALL | SHARED_FOLLOW_UP_ACTIONS |
This call is executed at the end of every statistics server alert check (regardless of whether an alert was actually triggered) in order to schedule follow-up actions like creating a runtime dump or activating traces. It is normal to see a certain load coming from this SQL statement. Runtimes of up to 20 ms are normal. The number of executions can theoretically be influenced by adjusting alert check intervals as described in SAP Note 2147247 ("How can the statistics server check intervals be adjusted?"), but this is normally not required. |
7488ecfdc727226c7bad7e99ebe21b25 e5332b10f3a1a4215728857efc0f8eda |
SELECT | SOURCE_ALERT_65_BACKUP_CATALOG |
This SELECT is linked to the statistics server action Alert_Backup_Long_Log_Backup. See SAP Note 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?" -> "Alert_Backup_Long_Log_Backup") for details about optimizations. |
5ef81b3843efbe961b19fdd79c2bd86b a3bbe3e5447bc42eb3f0ece820585294 |
CALL UPDATE |
STATISTICS_PREPARE_CALL_TIMER STATISTICS_SCHEDULE |
The UPDATE on STATISTICS_SCHEDULE is executed from within the procedure STATISTICS_PREPARE_CALL_TIMER. High runtimes in combination with record locks (ConditionalVariable Wait) on table STATISTICS_SCHEDULE can be caused by inefficient COMMIT handling. This problem is resolved as of Rev. 101. These calls are executed at the beginning of every statistics server check. It is normal to see a certain load coming from this SQL statement. Runtimes of up to 20 ms are normal. The number of executions can theoretically be influenced by adjusting alert check intervals as described in SAP Note 2147247 ("How can the statistics server check intervals be adjusted?"), but this is normally not required. |
ac08e80aa8512f9669eaf56dedc02202 c6c59150977220ea4fdea412cac7d1ce 0800928edd9d43764935a4e02abbfa15 16c226e204b54efc280091df267152fd 2b7fee2d2b95e792d3890051cbb97ec9 55bb62c4b9ff3f0b32de3a4df248e18c |
SELECT SELECT |
STATISTICS_CURRENT_ALERTS STATISTICS_LAST_CHECKS |
See SAP Notes 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?" -> STATISTICS_ALERTS_BASE) and make sure that the size of the underlying table STATISTICS_ALERTS_BASE remains on a reasonable level. |
d6fd6678833f9a2e25e7b53239c50e9a | CALL | STATISTICS_SCHEDULABLEWRAPPER |
This procedure is a wrapper for all embedded statistics server actions like history collections and alert checks. This statement hash typically indicates that it is called by the statistics server based on the configured check time intervals. See SAP Note 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?") for more information on how to analyze and optimize the actions. |
dc571bf5eb7cad9c313c20de904ab709 | CALL | STATISTICS_SCHEDULABLEWRAPPER |
This procedure is a wrapper for all embedded statistics server actions like history collections and alert checks. This statement hash typically indicates that it is called by SAP Solution Manager in order to extract history and alert information from SAP HANA. See SAP Note 2147247 ("How can the runtime and CPU requirements of the statistics server actions be analyzed and optimized?") for more information on how to analyze and optimize the actions. |
deefdc73509bee2c6e78103e6ba5a44c | SELECT | STXH |
The performance can suffer from a wrong query optimizer decision. As a workaround you can create an additional index on columns MANDT and TDOBJECT. |
97171744726a50e248699f758de23444 ba8161b6133633e2984828ce38aa669a 7f55e72c22694541b531e83d8bab8557 d24d8b8d00dd8b8df3e13dd9fb17b5f9 2b459e0d42037fe4d6880b238018a6f7 70efe3b4e470438543e6ecec418e4f02 905dbaa93a672b087c6f226bc283431d |
CALL CALL SELECT SELECT |
ALERT_DEC_EXTRACTOR_STATUS COLLECTOR_GLOBAL_DEC_EXTRACTOR_STATUS GLOBAL_DEC_EXTRACTOR_STATUS_BASE TABLES |
These CALL and SELECT statements are executed by the statistics server under the _SYS_STATISTICS user. They are linked to DXC (see M_EXTRACTORS) and look for tables with names like '/BIC/A%AO'. Tables following this naming convention are technical tables to control the DXC DSO activation process. If the CATALOG READ privilege is not granted to _SYS_STATISTICS, the query can take quite long and return limited results. You can massively improve the runtime by granting CATALOG READ to _SYS_STATISTICS. |
371d616387247f2101fde31ed537dec0 aaff34b83d10188d08109cb52a0069ae 3f96e3762412e8d4cf4c670f2993aa8a 51bffaafaddd0ec825ecb9f5bf1b5615 5d0b4a21b0c17c08a536bec04f2825cd a3f19325031503e9e4088a8e57515cd3 1cbbe08ef2bf075daecb9e383ae74deb a5539c73611c1d0ba9e4a5df719329b8 a382ffaeafac914c422752ab9c2f9eab 81d83c08828b792684d4fd4d2580038e 1ce978ccb4cfed10e1ef12dc554c2273 54c8e15427f41b9074b4f324fdb07ee9 da7ef0fee69db516f1e048217bca39e7 c197df197e4530b3eb0dcc1367e5ba4b |
DELETE DELETE DELETE DELETE DELETE DELETE DELETE INSERT INSERT INSERT |
TESTDATRNRPART0 TESTDATRNRPART1 TESTDATRNRPART2 TESTDATRNRPART3 TESTDATRNRPART4 TESTDATRNRPART5 TESTDATRNRPART6 TESTDATRNRPART0 TESTDATRNRPART1 TESTDATRNRPART2 |
SAP Note 1964024 provides a correction that reduces the DML operations on the TESTDATRNRPART<id> tables for all types of databases. The optimization is available as of BW support packages SAPKW73012, SAPKW73112 and SAPKW74007. Also make sure that no data targets exist with a high number of requests. See SAP Note 2037093 for more information. |
e1cdd703df87fc61ce8163fa107162a9 |
CALL |
TREXviaDBSL |
TREXviaDBSL is used to execute queries that are not possible with plain SQL (e.g. BW queries with execution modes > 0). In the SAP HANA SQL cache both the content and the bind values are hidden, so a direct analysis and optimization is not possible. SAP Note 2125337 provides a coding correction that allows tracing bind values via an expensive statements trace. See "What are typical approaches to tune expensive SQL statements?" -> "Optimize BW queries" for more information. |
3e3b8e11cad214e9a7a5cec6193ad270 |
UPDATE |
USR02 |
This UPDATE is frequently executed in SAP ABAP environments in order to update user specific information. Long runtimes are typically not caused by the statement itself, but by general infrastructure or SAP HANA issues. See SAP Note 2000000 and check for general performance issues (e.g. high CPU consumption, disk I/O issues or network problems). |
14. Is it required to create optimizer statistics in order to support optimal execution plans?
It is not necessary to create optimizer statistics for normal SAP HANA tables. SAP HANA determines optimal execution plans by certain heuristics (e.g. based on unique indexes and constraints), by ad-hoc sampling of data and by internally collecting and re-using statistical information.
Remote tables accessed via Smart Data Access (SDA, SAP Note 2180119) can take advantage of optimizer statistics. They can be created with the CREATE STATISTICS command and displayed via SQL: "HANA_SQL_Statistics" (SAP Note 1969700).
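A minimal sketch of the CREATE STATISTICS command for an SDA virtual table (all names are placeholders; the available statistics types depend on the revision):
-- Create histogram statistics on one column of a virtual table
CREATE STATISTICS ON "<schema>"."<virtual_table>" ("<column_name>") TYPE HISTOGRAM;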
15. Are all database operations recorded in the SQL cache (M_SQL_PLAN_CACHE)?
Standard operations like SELECT, INSERT, UPDATE or DELETE are recorded, but some more specific DML (e.g. TRUNCATE) and DDL operations (e.g. ALTER) are not contained.
Starting with Revision 74, accesses from stored procedures and SQLScript to temporary tables are also no longer stored in the SQL cache (see SAP Note 2003736, "Changed the implementation of SQL queries...").
16. Can sorting be supported by an appropriate index?
No, due to the technical structure it is not possible that a sort operation is supported by an index.
17. Is it possible to capture bind values of prepared SQL statements?
Often SQL statements are prepared with bind variables in order to avoid frequent reparses. Rather than literals you will then see question marks in the WHERE clause, e.g.:
SELECT * FROM "A004" WHERE "MANDT" = ? AND "KNUMH" = ? AND "KAPPL" = ?
In order to understand selectivities and correlation and to be able to reproduce a problem with a SQL statement it is important to know which actual bind values are used. This information can be determined based on specific traces (e.g. the ST05 SQL trace in ABAP environments).
On SAP HANA side the expensive statement trace captures bind values which can e.g. be evaluated via SQL: "HANA_SQL_ExpensiveStatements_BindValues" (SAP Note 1969700).
Additionally SAP HANA is able to capture the bind values of critical SQL statements in the SQL plan cache by default as of SPS 08. This capturing is controlled by the following parameters:
Parameter | Details |
indexserver.ini -> [sql] -> plan_cache_parameter_enabled | true: Activate capturing of bind values (for non-LOB columns) for long running SQL statements (default). false: Deactivate capturing of bind values. |
indexserver.ini -> [sql] -> plan_cache_parameter_sum_threshold | Minimum threshold for the total execution time of a SQL statement before the first set of bind values is captured (in ms, default: 100000) |
indexserver.ini -> [sql] -> plan_cache_parameter_threshold | After having captured the first set of bind values for a certain SQL statement, further sets of bind values are captured if the single execution time exceeds the parameter value (in ms, default: 100) and is higher than the single execution time of the previously captured bind values. |
indexserver.ini -> [sql] -> plan_cache_parameter_for_lob_enabled | true: Activate capturing of bind values for LOB columns for long running SQL statements; can result in significant data volume. false: Deactivate capturing of bind values for LOB columns (default). |
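Such parameters can be adjusted online via ALTER SYSTEM ALTER CONFIGURATION; a sketch lowering the capture threshold to 10 seconds (the value is in ms, as per the table above):
-- Capture bind values once the total execution time of a statement exceeds 10 s
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('sql', 'plan_cache_parameter_sum_threshold') = '10000' WITH RECONFIGURE;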
18. How can the performance of INSERTs and data loads be tuned?
INSERTs and data loads write new data, while other operations like SELECT, UPDATE or DELETE have to work on existing data. Therefore the typical performance factors are partially different. If you want to improve the performance of INSERTs and data loads, you can consider the following areas:
Area | Details |
Lock waits | See SAP Note 1999998 and optimize lock wait situations if required. Typical examples of locks an INSERT has to wait for are record locks (e.g. a concurrent INSERT of the same key) and internal locks (e.g. during the critical phase of a savepoint). |
Columns | During an INSERT every column has to be maintained individually, so the INSERT time significantly depends on the number of table columns. |
Indexes | Every existing index slows down an INSERT operation. Check if you can reduce the number of indexes during mass INSERTs and data loads. SAP BW provides possibilities to automatically drop and recreate indexes during data loads. The primary index normally must not be dropped. |
Bulk load | If a high number of records is loaded, you shouldn't perform an INSERT for every individual record. Instead you should take advantage of bulk loading options (i.e. inserting multiple records with a single INSERT operation) whenever possible. |
Parallelism | If a high number of records is loaded, you should consider parallelism on client side, so that multiple connections to SAP HANA are used to load the data. |
Commits | Make sure that a COMMIT is executed on a regular basis when mass INSERTs are done (e.g. after each bulk of a bulk load). |
Delta merge | Usually it is advantageous to disable auto merges when a mass load is performed; see SAP Note 2057046 for more details. An extremely large delta storage can reduce the load performance and increase the memory footprint, so it is usually good to trigger individual delta merges during really large loads, e.g. after every 100 million records. Avoid repeated merges of small delta storages or of delta storages with a high amount of uncommitted data in order to avoid unnecessary overhead. |
As a rough orientation, the following INSERT throughput values can be expected:
Constellation | Typical throughput |
Problem situations like long critical savepoint phases or other locks | < 500 records / second |
Normal, sequential single-row INSERTs | 1,000 - 10,000 records / second |
Highly parallelized bulk loads | 1,000,000 records / second |
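To illustrate several of these levers at once, a sketch of a mass load (table and file names are placeholders; the IMPORT options assume the standard CSV import syntax): auto merge is disabled during the load, the data is loaded with batching and parallel threads, and a manual delta merge is triggered afterwards:
-- Disable auto merge during the mass load
ALTER TABLE "<table_name>" DISABLE AUTOMERGE;
-- Bulk load with parallel threads and batched inserts
IMPORT FROM CSV FILE '/tmp/data.csv' INTO "<table_name>" WITH THREADS 4 BATCH 100000;
-- Trigger a manual delta merge after the load and re-enable auto merge
MERGE DELTA OF "<table_name>";
ALTER TABLE "<table_name>" ENABLE AUTOMERGE;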
19. Why are there significant differences between SQL statements on ABAP and SAP HANA side?
The ABAP Open SQL statements are transferred to the database via the database interface (DBI). In many cases the statement is modified in the DBI before being sent to the database. Typical adjustments are:
- If a column is compared with an empty variable in the WHERE section, the DBI omits this condition.
- When you use FOR ALL ENTRIES, the program splits a value list depending on the DBI parameters described in SAP Note 1987132 (particularly rsdb/max_blocking_factor, rsdb/max_in_blocking_factor) into statements with short IN lists or OR linkages.
- If the FOR ALL ENTRIES list is empty, all data is generally selected from the current client ("SELECT ... FROM <table> WHERE MANDT = ?"). All conditions in the WHERE part are ignored.
- Starting with kernel 7.42 a DISTINCT is generated in any of the following situations:
- Fast data access is used (rsdb/prefer_join_with_fda = 1, see SAP Note 1987132) and no LOB columns are selected
- The whole FOR ALL ENTRIES list can be satisfied with a single database selection and no LOB columns are selected
- Accesses to SAP pool and cluster tables are converted into accesses on the related table pools or table clusters. In many cases these pool and cluster tables are already transparent tables with SAP HANA.
- If tables are completely or generically buffered on SAP side, the buffers are reloaded, if necessary, with special DBI statements (for example, "SELECT * FROM <table> WHERE MANDT = ? ORDER BY <primary_key_fields>" for completely buffered tables) that may be completely different to the statement from the ABAP source code.
- Some operations (for example, kernel procedures, bulk operations, generations) can generate SQL statements although there is no regular SQL statement in the ABAP source code.
- Also when you use conversion exits, there can be significant differences between ABAP source code and database-side statements (such as additional conditions that are not explicitly present in the ABAP source code).
- IN conditions from the ABAP source code may be converted into any number of possible conditions on the database level, depending on your selection criteria: "=", "IN", "LIKE", "BETWEEN", ">", "<", ">=", "<="
- Columns that appear in both the selection list and the WHERE condition are removed from the selection list if it is clear from the WHERE condition what the column's value must be.
- If an expression ends with a space followed by a placeholder, the system generates an OR concatenation as follows:
Statement after DBI transformation: ... WHERE (<column> LIKE '<string> %' OR <column> LIKE '<string>')
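As an illustration of the FOR ALL ENTRIES transformation (a hypothetical example; table and column names are placeholders and the actual blocking factor depends on the profile parameters from SAP Note 1987132), a single ABAP statement may reach the database as several statements of this form:
-- One of several statements generated by the DBI for a FOR ALL ENTRIES
-- selection, assuming a blocking factor of 5
SELECT "MANDT", "COL1", "COL2"
FROM "<table_name>"
WHERE "MANDT" = ? AND "KEYCOL" IN (?, ?, ?, ?, ?);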
20. How does the EXPLAIN functionality work?
The following EXPLAIN functionalities exist:
Command | Rev. | Details |
EXPLAIN PLAN FOR <sql_statement> | | Generation of an explain plan for the explicitly specified SQL statement <sql_statement> |
EXPLAIN PLAN FOR SQL PLAN CACHE ENTRY <plan_id> | >= 90 | Generation of an explain plan for the SQL statement with PLAN_ID <plan_id> in M_SQL_PLAN_CACHE. This option is helpful to understand why a previously recorded SQL statement shows a different performance than current executions. |
In order to identify the proper entries in EXPLAIN_PLAN_TABLE, you can use "EXPLAIN PLAN SET STATEMENT_NAME = '<statement_name>' ..." when generating the explain plan. Then you can check EXPLAIN_PLAN_TABLE for entries related to <statement_name>.
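A minimal end-to-end sketch (statement name, table and condition are placeholders):
-- Generate an explain plan under a chosen statement name
EXPLAIN PLAN SET STATEMENT_NAME = 'my_statement' FOR
  SELECT * FROM "<table_name>" WHERE "<column_name>" = 'X';
-- Read the generated plan back from EXPLAIN_PLAN_TABLE
SELECT OPERATOR_NAME, OPERATOR_DETAILS, TABLE_NAME
FROM EXPLAIN_PLAN_TABLE
WHERE STATEMENT_NAME = 'my_statement';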
Keywords
HANA SQL statement tuning optimization request
Header Data
Released On | 28.01.2016 13:46:33 |
Release Status | Released to Customer |
Component | HAN-DB SAP HANA Database |
Priority | Normal |
Category | How To |