Symptom
You have questions related to the SAP HANA memory.
You experience high memory utilization or out-of-memory (OOM) dumps.
Environment
SAP HANA
Cause
1. Which indications exist for SAP HANA memory problems?
2. How can I collect information about the current SAP HANA memory consumption?
3. How can I collect information about the historic SAP HANA memory consumption?
4. Which important memory areas exist?
5. What does SAP HANA do if memory becomes scarce?
6. Which parameters can be used to limit the SAP HANA memory consumption?
7. How can I analyze problems related to the SAP HANA memory consumption?
8. Is it possible to extend the physical memory of a SAP HANA machine?
9. Which options exist to reduce the risk of SAP HANA memory issues?
10. How can I judge if the available memory is sufficient for the current system and a projected future growth?
11. Is it possible to monitor the memory consumption of SQL statements?
12. Is it possible to limit the memory that can be allocated by a single SQL statement?
13. What can I do if a certain heap allocator is unusually large?
14. How can I identify how a particular heap allocator is populated?
15. How often are OOM dumps written?
16. Where can I find more information regarding SAP HANA memory consumption?
17. How can the resident memory be smaller than the allocated memory?
18. What are typical reasons for significant size differences in memory vs. on disk?
19. Which general optimizations exist for reducing the SQL statement memory requirements?
20. How can the tables with the highest memory consumption be determined?
21. How much swap space should be configured for SAP HANA hosts?
22. How can I trigger a memory garbage collection?
23. Why do I get an OOM although the SAP HANA allocation limits aren't reached?
24. How can I involve SAP to perform a detailed memory check?
25. Why is the allocated memory in some heap allocators very large?
26. Why does the delta storage allocate more memory with SAP HANA SPS >= 09?
Resolution
1. Which indications exist for SAP HANA memory problems?
Trace files with the following naming conventions are created:
- <service>_<host>.<port>.rtedump.<timestamp>.oom.trc
- <service>_<host>.<port>.rtedump.<timestamp>.oom_memory_release.trc
- <service>_<host>.<port>.rtedump.<timestamp>.compositelimit_oom.trc
- <service>_<host>.<port>.rtedump.<timestamp>.after_oom_cleanup.trc
- <service>_<host>.<port>.emergencydump.<timestamp>.trc (if memory-related errors like "allocation failed" are responsible)
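The naming conventions above can be checked mechanically, e.g. when scanning a trace directory for OOM-related files. The following is a small illustrative sketch (not an SAP tool); it only inspects the suffix before ".trc" and deliberately does not validate the timestamp format:

```python
import re
from typing import Optional

# OOM-related trace file suffixes from the naming conventions listed above.
OOM_SUFFIXES = {"oom", "oom_memory_release", "compositelimit_oom", "after_oom_cleanup"}

def classify_trace_file(name: str) -> Optional[str]:
    """Return the OOM dump type encoded in a trace file name, or None."""
    m = re.search(r"\.([a-z_]+)\.trc$", name)
    if m and m.group(1) in OOM_SUFFIXES:
        return m.group(1)
    return None
```

For example, `classify_trace_file("indexserver_host1.30003.rtedump.1234.oom.trc")` returns `"oom"`, while an emergency dump file returns `None` (its memory relation can only be confirmed by checking the trace content, as noted below).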
The following error messages can indicate OOM situations. Be aware that some of the errors can also be issued in other scenarios. To make sure that they are really memory related, you have to check the related trace file.
SQL error -10108: Session has been reconnected
SQL error 129 while accessing table <table_name>
- transaction rolled back by an internal error: exception during deltalog replay.
- transaction rolled back by an internal error: TableUpdate failed
- transaction rolled back by an internal error: exception 1000002: Allocation failed ; $size$=1191936; $name$=TableUpdate; $type$=pool; $inuse_count$=2180; $allocated_size$=8180736; $alignment$=16#
- transaction rolled back by an internal error: TrexUpdate failed on table <table_name> with error: commitOptimizeAttributes() failed with rc=6900, Attribute engine failed;object=<object_name>$delta_1$en, rc=6900 - enforce TX rollback
- transaction rolled back by an internal error: TrexUpdate failed on table '<table_name>' with error: Attribute load failed;index=<table_name>en,attribute='$trexexternalkey$' (207), rc=6923 - enforce TX rollback
- transaction rolled back by an internal error: TrexUpdate failed on table '<table_name>' with error: AttributeEngine: not enough memory, rc=6952 - enforce TX rollback
SQL error 403 while accessing table <table_name>
internal error: "<schema_name>"."<table_name>": [133] (range 2)
SQL error 2048 while accessing table <table_name>
- column store error: search table error: [2] message not found
- column store error: search table error: [9] Memory allocation failed
- column store error: search table error: [1999] general error (no further information available)
- column store error: search table error: [2575] flatten scenario failed; Allocation failed
- column store error: search table error: [6900] Attribute engine failed
- column store error: search table error: [6923] Attribute load failed
- column store error: search table error: [6952] Error during optimizer search
- column store error: search table error: [6952] AttributeEngine: not enough memory
- column store error: [2450] error during merge of delta index occurred
- column store error: [6924] Attribute save failed
SQL error 3584 while accessing table
- distributed SQL error: [9] Memory allocation failed
- distributed SQL error: [2617] executor: plan operation execution failed with an exception
SQL error 3587 at CON
- invalid protocol or service shutdown during distributed query execution: [2613] executor: communication problem
- plan <plan> failed with rc 9: Error executing physical plan: Memory allocation failed
Delta merges (SAP Note 2057046) fail with the following error:
- 2048 column store error: [2484] not enough memory for table optimization
The following SAP HANA alerts indicate problems in the memory area:
Alert | Name | SAP Note | Description |
1 | Host physical memory usage | 1898317 | Determines what percentage of total physical memory available on the host is used. All processes consuming memory are considered, including non-SAP HANA processes. |
12 | Memory usage of name server | 1977101 | Determines what percentage of allocated shared memory is being used by the name server on a host. |
40 | Total memory usage of column store tables | 1977268 | Determines what percentage of the effective allocation limit is being consumed by individual column-store tables as a whole (that is, the cumulative size of all of a table's columns and internal structures). |
43 | Memory usage of services | 1900257 | Determines what percentage of its effective allocation limit a service is using. |
44 | Licensed memory usage | 1899511 | Determines what percentage of licensed memory is used. |
45 | Memory usage of main storage of column store tables | 1977269 | Determines what percentage of the effective allocation limit is being consumed by the main storage of individual column-store tables. |
55 | Columnstore unloads | 1977207 | Determines how many columns in columnstore tables have been unloaded from memory. This can indicate performance issues. |
64 | Total memory usage of table-based audit log | 2081869 | Determines what percentage of the effective allocation limit is being consumed by the database table used for table-based audit logging. |
68 | Total memory usage of row store | 2050579 | Determines the current memory size of a row store used by a service. |
Additionally, the following memory-related checks can point to problems:
Check ID | Details |
230 | Current memory utilization (%) |
231 | Time since memory utilization > 95 % (h) |
240 | Current swap utilization (GB) |
241 | Time since swap utilization > 1 GB (h) |
245 | Swap space size (GB) |
410 | Current allocation limit used (%) |
411 | Current allocation limit used by tables (%) |
413 | Time since allocation limit used > 80 % (h) |
415 | Curr. max. service allocation limit used (%) |
417 | Time since service alloc. limit used > 80 % (h) |
420 | Heap areas currently larger than 50 GB |
421 | Heap areas larger than 100 GB (last day) |
422 | Heap areas larger than 200 GB (history) |
425 | Pool/RowEngine/CpbTree leak size (GB) |
430 | Number of column store unloads (last day) |
431 | Time since last column store unload (days) |
440 | Shared memory utilization of nameserver (%) |
450 | Tables with memory LOBs > 2 GB |
453 | Size of non-unique concat attributes (GB) |
530 | Shared memory row store size (GB) |
645 | Number of OOM tracefiles (last day) |
746 | Histories with primary key |
747 | Number of zero entries in HOST_SQL_PLAN_CACHE |
748 | History of M_CS_UNLOADS collected |
2. How can I collect information about the current SAP HANA memory consumption?
SAP Note 1969700 provides the following SQL statements to collect information related to the current SAP HANA memory allocation:
SQL statement name | Description |
SQL: "HANA_Memory_Overview" | Provides information about current memory allocation (including heap, row store, column store, allocation limit and license limit) |
SQL: "HANA_Memory_TopConsumers" | Lists the current top memory consumers (e.g. tables and heap areas) |
If you don't have SQL access (e.g. on the secondary site of a SAP HANA system replication environment), you can use the operating system tool hdbcons (SAP Note 2222218) and 'mm l -S' to display the allocators sorted by the inclusive memory size. Sorting by the more important exclusive size in use is not possible.
3. How can I collect information about the historic SAP HANA memory consumption?
SAP Note 1969700 provides the following SQL statements to collect information related to the historic SAP HANA memory allocation:
SQL statement name | Description |
SQL: "HANA_Memory_TopConsumersInHistory" | Lists historic top memory consumers (e.g. tables and heap areas) |
SQL: "HANA_SQL_ExpensiveStatements" | Lists the memory consumption of executed SQL statements (SPS 08). Relevant output columns: MEM_USED_GB, MEM_PER_EXEC_GB. Both the expensive SQL statement trace and statement memory tracking need to be activated, see "Is it possible to limit the memory that can be allocated by a single SQL statement?" in this SAP Note for more information. |
4. Which important memory areas exist?
The following memory areas are most important:
Memory Area | Context | Level | Details |
Physical memory | operating system | global | Total amount of memory physically available on host level (typically RAM) |
Virtual memory | operating system | process | Total amount of memory allocated by a process held both in physical memory and in paging area on disk |
Resident memory | operating system | process | Total amount of memory allocated by a process held in physical memory |
Allocated memory | SAP HANA | process | Total amount of memory allocated by the SAP HANA processes, limited by the configurable SAP HANA global allocation limit. Less relevant for SAP HANA memory analysis because allocated but unused memory can be re-used when required |
Used memory | SAP HANA | process | Total amount of memory in use by the SAP HANA processes, relevant to understand SAP HANA memory footprint |
Shared memory | SAP HANA | global | Memory that can be accessed by different processes, e.g. row store tables |
Heap memory | SAP HANA | process | Memory exclusively accessible by threads of a single process (e.g. indexserver), e.g. column store data and intermediate query results |
Code | SAP HANA | global | Program code |
Stack | SAP HANA | process | Stack memory of the SAP HANA threads |
- Physical memory >= resident memory
- Virtual memory >= resident memory
- Virtual memory >= allocated memory >= used memory
- Allocated memory = shared memory + allocated heap memory
- Used memory = shared memory + used heap memory
- Code, stack: Usually negligible sizes
From a memory analysis perspective we can usually focus on the used memory and assume that the allocated memory is released whenever required.
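The identities above can be sketched with a few lines of arithmetic. The numbers below are illustrative (in GB, not from a real system):

```python
# Illustrative numbers (GB) for the identities above:
# allocated memory = shared memory + allocated heap memory
# used memory      = shared memory + used heap memory
shared_gb = 121
heap_allocated_gb = 3329
heap_used_gb = 1518

allocated_gb = shared_gb + heap_allocated_gb
used_gb = shared_gb + heap_used_gb

# Allocated but currently unused memory can be re-used when required,
# so allocated >= used always holds and the used memory is the relevant
# figure for footprint analysis.
assert allocated_gb >= used_gb
print(allocated_gb, used_gb)  # prints: 3450 1639
```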
5. What does SAP HANA do if memory becomes scarce?
Unlike other databases (e.g. Oracle: PGA in memory -> PSAPTEMP on disk), SAP HANA doesn't allocate disk space if certain operations require more memory than available. Instead, the following actions are taken:
- Free memory segments are returned to the operating system.
- Garbage collection is triggered so that allocated memory is defragmented and freed for re-use.
- The resource container is shrunk:
- Non-critical heap areas are reduced (e.g. the SAP HANA page cache Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page or compiled L code)
- Column store unloads are triggered (SAP Note 2127458); this activity can significantly impact the performance.
- Transactions are terminated with error if their memory requests can no longer be fulfilled.
- An out-of-memory (OOM) dump is written.
6. Which parameters can be used to limit the SAP HANA memory consumption?
The following parameters can be used to limit the overall or process-specific SAP HANA memory allocation:
Parameter | Unit | Details |
global.ini -> [memorymanager] -> global_allocation_limit | MB | Limits the overall memory consumption of SAP HANA. The default value depends on the available physical memory and the SAP HANA revision level. |
<service>.ini -> [memorymanager] -> allocationlimit | MB or % | Limits the memory consumption of the related SAP HANA process (<service>). If "%" is specified at the end of the parameter value (without preceding blank), the value is interpreted as a percentage of RAM, otherwise it is interpreted as MB. The standalone statistics server uses a value of "5%" by default. The default allocation limit of all other services, including the indexserver, likewise depends on the available physical memory and the revision level. |
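The "%" vs. MB interpretation of the allocationlimit value can be sketched as follows. This is an illustrative re-implementation of the rule described above, not SAP code:

```python
def effective_allocation_limit_mb(value: str, physical_memory_mb: float) -> float:
    """Interpret an allocationlimit value: a trailing '%' (without preceding
    blank) means percentage of RAM, otherwise the number is taken as MB.
    Illustration only, not the actual SAP HANA parsing code."""
    if value.endswith("%"):
        return physical_memory_mb * float(value[:-1]) / 100.0
    return float(value)

# The standalone statistics server default of "5%" on a host with 1 TB RAM:
print(effective_allocation_limit_mb("5%", 1024 * 1024))  # prints: 52428.8
```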
7. How can I analyze problems related to the SAP HANA memory consumption?
SAP Note 1840954 describes steps to analyze and resolve SAP HANA memory issues.
SAP Note 1984422 describes how to analyze an out of memory (OOM) dump file.
SAP Note 2222718 provides a decision-tree approach for analyzing problems in the SAP HANA memory area.
The SAP HANA Troubleshooting and Performance Analysis Guide covers, among other topics, the analysis of memory related issues.
8. Is it possible to extend the physical memory of a SAP HANA machine?
In general the configured physical memory depends on factors like hardware, scenario and available CPUs and must not be changed. SAP Note 1903576 describes when and how you can apply for an exception.
9. Which options exist to reduce the risk of SAP HANA memory issues?
The following options exist to reduce the risk of SAP HANA memory issues:
Action / Feature | Details |
Cleanup of basis tables | Make sure that house-keeping is set up for technical, administration and communication tables so that they don't consume unnecessary memory. See SAP Note 706478 for more information. |
Archiving | Implement archiving strategies for business data. See the Archiving section at http://service.sap.com/ilm for more information. |
Hybrid LOBs | Hybrid LOBs are not loaded into memory when their size exceeds a defined limit, so it is usually beneficial for memory consumption to take advantage of this feature. SAP Note 1994962 describes how columns defined as memory LOBs can be converted to hybrid LOBs. SAP ABAP table columns with the LRAW data type are mapped to either LOB or VARBINARY. As VARBINARY always has to be loaded into memory, this can affect memory utilization. See SAP Note 2220627 ("Is VARBINARY also a LOB Type?") for more information. |
Reduction of number of indexes | Check for indexes with high memory requirements (e.g. using SQL: "HANA_Indexes_LargestIndexes" from SAP Note 1969700) and check if you can drop some of these indexes. |
Transition from multi-column to single-column indexes | Multi-column indexes require much more memory than single-column indexes, because an additional internal column (CONCAT attribute) needs to be created. Check for indexes with high memory requirements (e.g. using SQL: "HANA_Indexes_LargestIndexes" from SAP Note 1969700) and check if you can redefine some multi-column indexes as single-column indexes. Often it is a good compromise to define an index only on the most selective column; further columns like MANDT would significantly increase the memory requirements. |
Reduction of CONCAT attributes | CONCAT attributes are specific internal columns that can be created for various reasons. Some of them may no longer be required. See SAP Note 1986747 for more information. |
Paged attributes | Paged attributes are columns that can be loaded into memory piece-wise. All columns apart from primary key and internal columns can be defined as paged attributes. For more details see SAP Note 1871386. |
Inverted hash indexes | As of SPS 09 you can reduce the size of multi-column indexes using the INVERTED HASH feature. See SAP Note 2109355 for more information. |
Move large tables to column store | Table data is compressed efficiently in the column store, so moving tables from row store to column store usually reduces the memory allocation significantly. Furthermore, table columns are only loaded into column store memory when required, not during startup. Therefore you can check if large tables exist in the row store that can be moved to the column store. Be aware that tables with a significant amount of modifications can suffer from performance regressions when moved to the column store. In case of SAP standard tables you should usually double-check with SAP if the move to the column store is an option. |
Analysis of large heap areas | Some heap areas may be larger than required, e.g. due to bugs or inadequate configuration. See the question "What can I do if a certain heap allocator is unusually large?" below for more details. |
SQL statement optimization | SQL statements processing large amounts of data or accessing data inefficiently can be responsible for significant memory growth. See SAP Note 2000002 for SQL statement optimization and the question "Which general optimizations exist for reducing the SQL statement memory requirements?" below for more information. |
Transactional problems | Long running transactions or idle cursors can impact garbage collection and result in a high number of versions or histories. See SAP Note 2169283 for more information about symptoms, analysis steps and resolutions in the area of garbage collection. |
Fragmentation | Fragmentation effects can result in an unnecessarily large row store. See SAP Note 1813245 for more information on checking row store fragmentation and reorganizing the row store. |
Large delta storage | Many records in the delta storage of tables can increase the size of the column store. See SAP Note 2057046 and make sure that delta merges are running properly. |
Column store compression | See SAP Note 2112604 and make sure that the column store tables are compressed optimally. |
Unload configuration | It is possible to influence the unload behavior so that less critical objects are unloaded first ("UNLOAD PRIORITY <level>" setting for tables) or that unloads in general take place earlier or later depending on individual needs (indexserver.ini -> [memoryobjects] -> unload_lower_bound parameter). For more information see SAP Note 2127458. |
Dynamic tiering | Using dynamic tiering you can mark data as hot, warm or cold. Typically only hot data resides in SAP HANA memory. See SAP Note 2140959 for more information related to dynamic tiering. |
Smart data access | Based on smart data access, SAP HANA can retrieve data from tables in external databases (e.g. Sybase, Oracle or SAP HANA). This reduces the need to load all accessed data into SAP HANA. See SAP Note 2180119 for more information regarding smart data access. |
Table distribution | If some hosts in a scale-out scenario suffer from high memory consumption, you can re-locate tables or table partitions from hosts with high memory consumption to hosts with lower memory consumption. See section "Table Distribution in SAP HANA" of the SAP HANA Administration Guide for more information. |
Global allocation limit | The parameter global.ini -> [memorymanager] -> global_allocation_limit defines the maximum overall memory size that can be allocated by the SAP HANA instance. The default value depends on the available physical memory and the SAP HANA revision level. |
OS configuration | Make sure that the operating system configuration is in line with the SAP recommendations. See SAP Note 2000003 ("How can the configuration and performance of the SAP HANA hardware, firmware and operating system be checked?") for more information. See also SAP Note 1980196, which discusses OOM errors due to an inadequate setting of the Linux parameter /proc/sys/vm/max_map_count, and SAP Note 2123782, which suggests a pagepool size reduction from 16 GB to 4 GB in Lenovo / GPFS environments. |
SAP HANA patch level | The memory allocation of certain heap areas depends on the SAP HANA patch level. Newer revision levels may include optimizations that reduce the memory allocation, so it is generally useful to run a reasonably new revision level. |
Scale-out layout | Using fewer hosts with a larger amount of physical memory each reduces the risk that specific SQL statements with high memory requirements result in OOM situations, because more memory is available on each host. For example, 2 hosts with 1 TB memory each have a lower risk of OOM situations than 8 hosts with 256 GB each. |
Statistics server optimizations | See SAP Note 2147247 ("How can the memory requirements of the statistics server be minimized?") for details. |
BW DTP delta initialization request optimization | If you face high memory consumption related to DTP activities in BW, check SAP Note 2230080 for possible optimizations. |
Bypassing SAP HANA bugs | Make sure that you are on a reasonably new SAP HANA revision level and avoid situations that can cause memory related issues due to SAP HANA bugs. |
Sizing review | If all of the above checks didn't help to reduce the OOM situations, you should double-check the SAP HANA sizing. See SAP Note 2000003 ("What has to be considered for sizing SAP HANA?") for more information. |
10. How can I judge if the available memory is sufficient for the current system and a projected future growth?
There are some simple rules of thumb that can help to understand if the memory is properly sized in an existing system, e.g. that the memory size should be at least two times the backup size. At this point we won't use these rules, but instead describe a more detailed approach based on a real-life SAP Suite on HANA system with 4 TB of physical memory.
In a first step it is important to understand how much memory is allocated by the different main areas. This information is retrieved via SQL: "HANA_Memory_TopConsumers" (AGGREGATE_BY = 'AREA'):
-----------------------------------------------
|AREA        |SIZE_GB |SIZE_PCT|CUM_SIZE_PCT|
-----------------------------------------------
|Column store| 1011.72|   60.55|       60.55|
|Heap area   |  446.89|   26.74|       87.30|
|Row store   |  128.77|    7.70|       95.01|
|Code        |    6.62|    0.39|       95.41|
|Stack       |    1.58|    0.09|       95.50|
-----------------------------------------------
We can see that around 1.1 TB is used by the column store, 0.1 TB by the row store and an additional 0.4 TB by heap areas that are not an integral part of other areas. The total memory utilization of SAP HANA is significantly below 2 TB, so we can already conclude that there is a lot of safety margin for exceptional situations and future growth before the 4 TB memory limit is reached.
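The SIZE_PCT and CUM_SIZE_PCT columns in such output are each area's share of the total and the running sum of those shares. A small sketch of the arithmetic (assumption: percentages here are taken relative to the listed areas; in the real output they refer to the total used memory, which is why the shown values end below 100 %):

```python
def add_percentages(areas):
    """areas: list of (name, size_gb) -> list of (name, size_gb, pct, cum_pct).
    Illustrative re-computation of SIZE_PCT / CUM_SIZE_PCT, not the SQL script."""
    total = sum(size for _, size in areas)
    rows, cum = [], 0.0
    for name, size in areas:
        pct = 100.0 * size / total
        cum += pct
        rows.append((name, size, round(pct, 2), round(cum, 2)))
    return rows
```

For example, `add_percentages([("Column store", 50.0), ("Heap area", 30.0), ("Row store", 20.0)])` yields shares of 50 %, 30 % and 20 % with cumulative values 50 %, 80 % and 100 %.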
More detailed information can be determined with SQL: "HANA_Memory_Overview" (SAP Note 1969700). The output for the same system looks like:
----------------------------------------------------------------------------------------------------------------------------------------
|NAME                                |TOTAL_GB|DETAIL_GB                    |DETAIL2_GB                                                       |
----------------------------------------------------------------------------------------------------------------------------------------
|User-defined global allocation limit|not set |                             |                                                                 |
|License memory limit                |    4000|                             |                                                                 |
|License usage                       |    3000| 1554 (2014/03/01-2014/03/31)|                                                                 |
|                                    |        | 2873 (2014/04/01-2014/04/30)|                                                                 |
|                                    |        | 2849 (2014/05/01-2014/05/31)|                                                                 |
|                                    |        | 3000 (2014/06/01-2014/06/27)|                                                                 |
|Physical memory                     |    4040| 4040 (hlahana21)            |                                                                 |
|HANA instance memory (allocated)    |    3450| 3450 (hlahana21)            |                                                                 |
|HANA instance memory (used)         |    1639| 1639 (hlahana21)            |                                                                 |
|HANA shared memory                  |     121|  121 (hlahana21)            |                                                                 |
|HANA heap memory (used)             |    1508| 1508 (hlahana21)            | 355 (Pool/NameIdMapping/RoDict)                                 |
|                                    |        |                             | 192 (Pool/AttributeEngine-IndexVector-Sp-Indirect)              |
|                                    |        |                             | 105 (Pool/AttributeEngine-IndexVector-Single)                   |
|                                    |        |                             | 102 (Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page)|
|                                    |        |                             |  85 (Pool/RowEngine/QueryExecution)                             |
|                                    |        |                             |  73 (Pool/AttributeEngine/idattribute)                          |
|                                    |        |                             |  66 (Pool/Statistics)                                           |
|                                    |        |                             |  58 (Pool/AttributeEngine)                                      |
|                                    |        |                             |  44 (Pool/AttributeEngine-IndexVector-SingleIndex)              |
|                                    |        |                             |  38 (Pool/RowEngine/CpbTree)                                    |
|Column store size                   |    1011| 1011 (hlahana21)            | 315 (KONV)                                                      |
|                                    |        |                             |  84 (BSEG)                                                      |
|                                    |        |                             |  42 (ZARIXSD5)                                                  |
|                                    |        |                             |  36 (VBFA)                                                      |
|                                    |        |                             |  32 (ZARIXSD2)                                                  |
|                                    |        |                             |  31 (EDID4)                                                     |
|                                    |        |                             |  29 (BSIS)                                                      |
|                                    |        |                             |  28 (CDPOS)                                                     |
|                                    |        |                             |  25 (ZARIXMM2)                                                  |
|                                    |        |                             |  18 (KONP)                                                      |
|Row store size                      |     129|  129 (hlahana21)            |  37 (A726)                                                      |
|                                    |        |                             |  30 (TST03)                                                     |
|                                    |        |                             |  12 (EDIDS)                                                     |
|                                    |        |                             |   7 (SRRELROLES)                                                |
|                                    |        |                             |   5 (EDIDC)                                                     |
|                                    |        |                             |   4 (D010TAB)                                                   |
|                                    |        |                             |   4 (SWNCMONI)                                                  |
|                                    |        |                             |   3 (/SDF/MON)                                                  |
|                                    |        |                             |   3 (DD03L)                                                     |
|                                    |        |                             |   2 (REPOSRC)                                                   |
|Disk size                           |    1194| 1194 (global)               | 320 (KONV)                                                      |
|                                    |        |                             | 104 (BSEG)                                                      |
|                                    |        |                             |  42 (ZARIXSD5)                                                  |
|                                    |        |                             |  36 (VBFA)                                                      |
|                                    |        |                             |  32 (ZARIXSD2)                                                  |
|                                    |        |                             |  30 (EDID4)                                                     |
|                                    |        |                             |  30 (TST03)                                                     |
|                                    |        |                             |  29 (BSIS)                                                      |
|                                    |        |                             |  27 (CDPOS)                                                     |
|                                    |        |                             |  25 (ZARIXMM2)                                                  |
----------------------------------------------------------------------------------------------------------------------------------------
The heap memory size is reported as 1508 GB, which is much more than the 447 GB from further above. The reason is that in the second result list all heap areas are considered, including the ones that form the basis of the column store. This means that most of the 1508 GB heap allocation overlaps with the column store size. The shared memory size of 121 GB overlaps with the row store.
The allocated instance memory of 3450 GB is much higher than the used instance memory of 1639 GB, because SAP HANA tends to keep allocated memory allocated as long as there is no memory shortage. From a sizing perspective the used memory matters.
The memory overview output therefore also indicates that the used memory is significantly below 2 TB and far away from the 4 TB memory limitation.
A closer look into the top heap areas (SQL: "HANA_Memory_TopConsumers", AREA= 'HEAP') shows the following top allocators for the same system:
------------------------------------------------------------------------
|DETAIL                                                    |SIZE_GB |
------------------------------------------------------------------------
|Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page|  105.70|
|Pool/RowEngine/QueryExecution                             |   84.32|
|Pool/Statistics                                           |   65.97|
|Pool/JoinEvaluator/TranslationTable                       |   24.90|
------------------------------------------------------------------------
The Page allocator, responsible for 106 GB of memory utilization, is a kind of file system buffer that can reduce its size without problems whenever there is a memory shortage. So we can assume that around another 80 GB could be saved if required. This means that the total required memory is about 1550 GB.
Conclusion: Even if the used memory size doubles, it is still well below the memory limit (3100 GB vs. 4000 GB), and exceptional situations (e.g. significant growth of certain heap allocators) can also be handled without running into memory pressure.
It is useful to repeat this analysis from time to time and also check the historic memory utilization (SQL: "HANA_Memory_TopConsumers_History") to get a good understanding of the memory requirements over time.
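The headroom reasoning above can be condensed into a small helper. This is a rule-of-thumb sketch, not an official sizing formula; the growth factor of 2 mirrors the "used memory doubles" assumption in the example:

```python
def sizing_headroom_ok(required_gb: float, memory_limit_gb: float,
                       growth_factor: float = 2.0) -> bool:
    """Rough headroom check: does the memory footprint (used memory minus
    releasable caches) still fit under the limit after projected growth?
    Rule of thumb only, not an official SAP sizing formula."""
    return required_gb * growth_factor <= memory_limit_gb

# Example system above: ~1550 GB required vs. a 4000 GB limit.
print(sizing_headroom_ok(1550, 4000))  # prints: True (3100 GB <= 4000 GB)
```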
11. Is it possible to monitor the memory consumption of SQL statements?
You can activate the statement memory tracking feature by setting the following parameters:
- global.ini -> [resource_tracking] -> enable_tracking = on
- global.ini -> [resource_tracking] -> memory_tracking = on
Changes to both parameters can be done online; no restart is required.
When memory tracking is active, the following memory information is available:
Patch level | Table | Column |
>= Rev. 80 | M_EXPENSIVE_STATEMENTS | MEMORY_SIZE |
>= Rev. 94 | M_ACTIVE_STATEMENTS, M_PREPARED_STATEMENTS | ALLOCATED_MEMORY_SIZE, USED_MEMORY_SIZE, AVG_EXECUTION_MEMORY_SIZE, MAX_EXECUTION_MEMORY_SIZE, MIN_EXECUTION_MEMORY_SIZE, TOTAL_EXECUTION_MEMORY_SIZE |
>= Rev. 94 | M_CONNECTION_STATISTICS | AVG_EXECUTION_MEMORY_SIZE, MAX_EXECUTION_MEMORY_SIZE, MIN_EXECUTION_MEMORY_SIZE, TOTAL_EXECUTION_MEMORY_SIZE |
>= Rev. 100 | M_SQL_PLAN_CACHE | AVG_EXECUTION_MEMORY_SIZE, MAX_EXECUTION_MEMORY_SIZE, MIN_EXECUTION_MEMORY_SIZE, TOTAL_EXECUTION_MEMORY_SIZE |
Before Rev. 94 the expensive statement trace could only be triggered by runtimes of SQL statements. Starting with Rev. 94 you can use the following parameter to trigger the recording of expensive SQL statements in M_EXPENSIVE_STATEMENTS based on the memory consumption:
global.ini -> [expensive_statement] -> threshold_memory = <bytes>
12. Is it possible to limit the memory that can be allocated by a single SQL statement?
Starting with SPS 08 you can limit the memory consumption of single SQL statements. As a prerequisite you need to have the statement memory tracking feature enabled as described above. Additionally you have to set the following parameter in order to define the maximum permitted memory allocation per SQL statement and host:
global.ini -> [memorymanager] -> statement_memory_limit = <maximum_memory_allocation_in_gb>
For more details see SAP Note 2222250 ("How can workload management be configured for memory?").
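The effect of the limit can be sketched as follows. This is a hedged illustration of the semantics only (a statement exceeding the limit is aborted, rather than the whole service); the helper name and error text are made up for this sketch:

```python
def check_statement_memory(used_bytes: int, statement_memory_limit_gb: int) -> None:
    """Illustration of the statement_memory_limit semantics: when the memory
    tracked for a single statement on one host exceeds the configured limit,
    SAP HANA aborts that statement with an OOM-style error. Sketch only."""
    if used_bytes > statement_memory_limit_gb * 1024 ** 3:
        raise MemoryError("statement memory limit exceeded, statement aborted")

check_statement_memory(10 * 1024 ** 3, 50)  # 10 GB under a 50 GB limit: passes
```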
13. What can I do if a certain heap allocator is unusually large?
See SAP Note 1840954 for some general advice. The following table contains allocator-specific recommendations. Normally there is no need to perform manual analysis and optimization, so make sure that you are in a pathological or critical situation before you consider any changes:
Allocator | Purpose | Analysis Steps | |||||||||
AllocateOnlyAllocator-unlimited/FLA-UL<3145728,1>/MemoryMapLevel2Blocks | Internal memory management | This allocator contains information for managing the SAP HANA memory. Normally no optimization is necessary. A high memory utilization in the system can result in the allocation of smaller memory chunks, which will result in a larger allocator. Starting with SAP HANA SPS 11 first improvements have been made to reduce the memory footprint. Furthermore, the possibility for a more detailed analysis was implemented. If you experience a very high (and possibly rising) memory consumption due to this allocator, you should open a SAP incident on component HAN-DB. |
Pool/AttributeEngine/Delta Pool/AttributeEngine/Delta/BtreeDictionary Pool/AttributeEngine/Delta/Cache Pool/AttributeEngine/Delta/InternalNodes Pool/AttributeEngine/Delta/LeafNodes |
Delta storage components | See SAP Note 2057046 and make sure that delta merges are properly configured and executed, so that the delta storage size of the tables remains on acceptable levels. | |||||||||
Pool/AttributeEngine Pool/AttributeEngine/idattribute Pool/AttributeEngine-IndexVector-BlockIndex Pool/AttributeEngine-IndexVector-BTreeIndex Pool/AttributeEngine-IndexVector-Single Pool/AttributeEngine-IndexVector-SingleIndex Pool/AttributeEngine-IndexVector-Sp-Cluster Pool/AttributeEngine-IndexVector-Sp-Indirect Pool/AttributeEngine-IndexVector-Sp-Prefix Pool/AttributeEngine-IndexVector-Sp-Rle Pool/AttributeEngine-IndexVector-Sp-Sparse Pool/ColumnStoreTables/Delta/BtreeDictionary Pool/ColumnStoreTables/Delta/Btreeindex Pool/ColumnStoreTables/Delta/Cache Pool/ColumnStoreTables/Delta/InternalNodes Pool/ColumnStoreTables/Delta/LeafNodes Pool/ColumnStoreTables/Main/Compressed/Cluster Pool/ColumnStoreTables/Main/Compressed/Indirect Pool/ColumnStoreTables/Main/Compressed/Prefix Pool/ColumnStoreTables/Main/Compressed/Rle Pool/ColumnStoreTables/Main/Compressed/Sparse Pool/ColumnStoreTables/Main/Dictionary/RoDict Pool/ColumnStoreTables/Main/Dictionary/ValueDict Pool/ColumnStoreTables/Main/Index/Block Pool/ColumnStoreTables/Main/Index/Single Pool/ColumnStoreTables/Main/PagedUncompressed Pool/ColumnStoreTables/Main/Rowid Pool/ColumnStoreTables/Main/Text/DocObjects Pool/ColumnStoreTables/Main/Uncompressed Pool/malloc/libhdbbasement.so Pool/NameIdMapping/RoDict |
Column store components | These allocators are responsible for parts of the column store. Their memory allocation decreases implicitly if you reduce the amount of table data in the column store (archiving, cleanup, reduction of indexes, ...). |
Pool/BitVector | Basic data structure (e.g. temporary query results, columnar data, transactional info of column tables) | Can be linked to problems with garbage collection in column store, see "Which options exist to reduce the risk of SAP HANA memory issues?" -> "Transactional problems" for details. | |||||||||
Pool/CS_TableSearch | Query optimizer related data structures | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. | |||||||||
Pool/DocidValueArray | Set of rowids and related values in context of join engine | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that join SQL statements are executed as memory-efficient as possible. | |||||||||
Pool/FemsCompression/CompositeFemsCompression | FEMS compression | The field element selection (FEMS) compression is used for BW queries with execution modes 2 and 3 in order to reduce the amount of data transferred to the BW OLAP engine within SAP HANA. In some cases FEMS can result in increased memory requirements. See BW on HANA and the Query Execution Mode for more information related to BW query execution modes. As a local workaround you can check if executing the query in question in execution mode 0 is an acceptable alternative. Execution mode 2 instead of 3 is also worth a try, because the underlying FEMS activities are different and may not run into the same issues. As a global workaround you can disable FEMS compression in method _GET_TREX_REQ_FLAGS_READ of class CL_RSDRV_TREX_API_STMT by commenting the following line with a leading '*' (see pilot SAP Note 1828751):
r_trex_req_flags = r_trex_req_flags + 33554432.
As this will lead to disadvantages in other areas (e.g. an increasing amount of transmitted data), you should undo this change once you have understood and fixed the reason for the high FEMS-related memory consumption. |
Pool/IndexRebuildAllocator | Memory area for row store index rebuilds | This issue can happen with SPS 07 and SPS 08. See SAP Note 2005478 and set the following parameter as a workaround in order to disable row store index rebuilds during startup:
indexserver.ini -> [row_engine] -> use_jobex_index_rebuild = false |
Pool/itab Pool/itab/VectorColumn |
Column store (intermediate) search results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. Pool/itab entries are preserved for SQL statements referenced in the SQL plan cache and the SQL result cache. Therefore you should check if the SQL cache and the result cache are configured properly. For testing purposes both caches can be cleared:
ALTER SYSTEM CLEAR SQL PLAN CACHE
ALTER SYSTEM CLEAR COLUMN RESULT CACHE
If the Pool/itab memory allocation remains at a high (and rising) level, it might be caused by a memory leak. In this case you can consider the following options:
Pool/JoinEvaluator/DictsAndDocs | Join engine dictionaries | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. Check if inefficient joins of partitioned tables are responsible and reduce or optimize partitioning. This allocator may also grow when an index is created because the join engine takes over some data processing. |
Pool/JoinEvaluator/JECalculate Pool/JoinEvaluator/JECreateNTuple Pool/JoinEvaluator/JEPreAggregate Pool/JoinEvaluator/JEStep1 Pool/JoinEvaluator/JEStep2 Pool/JoinEvaluator/NTuple |
Join engine intermediate data structures | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. If you can't directly identify the SQL statements responsible for the memory growth, you can use SQL: "HANA_Threads_ThreadSamples_FilterAndAggregation" (THREAD_DETAIL = '%(JE%', AGGREGATE_BY = 'HASH, THREAD_DETAIL') available via SAP Note 1969700 to check for SQL statements with a significant processing time in related join engine functions. |
Pool/JoinEvaluator/JEPlanData/deserialized | Join engine intermediate data structures involving inter-node communication | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. Check if the distribution of involved tables across nodes is already optimal or if you can adjust it so that less inter-node data transfer is required. See SAP Note 2081591 for more information about SAP HANA table distribution. |
Pool/JoinEvaluator/JEAssembleResults Pool/JoinEvaluator/JECalculate/Results Pool/JoinEvaluator/JERequestedAttributes/Results |
Join engine results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. If you can't directly identify the SQL statements responsible for the memory growth, you can use SQL: "HANA_Threads_ThreadSamples_FilterAndAggregation" (THREAD_DETAIL = '%(JE%', AGGREGATE_BY = 'HASH, THREAD_DETAIL') available via SAP Note 1969700 to check for SQL statements with a significant processing time in related join engine functions. This allocator can grow considerably when late materialization isn't used. For various reasons (e.g. bugs described in SAP Note 1975448) the following parameters may have been increased, resulting in higher memory requirements:
indexserver.ini -> [search] -> late_materialization_threshold
indexserver.ini -> [search] -> late_materialization_threshold_for_insert
Unset these parameters as soon as you have another solution in place (e.g. a revision level with the included bug fix). Other reasons for a high memory consumption are described in the following SAP Notes:
Pool/JoinEvaluator/TranslationTable | Join column mapping | See SAP Note 1998599 for more information related to translation tables. | |||||||||
Pool/L/llang/Runtime/Local | Intermediate Llang script results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. Llang queries may be manually created or results of FOX or SQLScript. The allocator can grow if a large amount of strings or CLOB values is processed. The name of the Llang program can be found in the call stack, e.g. _SYS_PLE:20160126114423_4338930:TMPDATA in the following case:
20: 0x00007fdd4b55b8b1 in ljit::dynamic/_split_main_0+0x3870 at fox/cen_"SAPSR3"."_SYS_PLE:20160126114423_4338930:TMPDATA".cv053_fox_LLangView.56A7F0213FB8EFC7E10000000AAA0052:0 (<unknown>) |
Pool/LVCAllocator/LVCContainerDir Pool/LVCAllocator/LVCContainerDir/LVCContainer_<id> Pool/LVCAllocator/LVCObjectPageDir Pool/LVCAllocator/LVC_ObjectPageDir |
liveCache data | These allocators hold the actual liveCache data and so their sizes should correspond to the amount of liveCache data. | |||||||||
Pool/malloc/libhdbcalcengine.so | Calculation engine intermediate results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. If you observe a growth of this allocator in combination with calculation engine processing (e.g. TREXviaDBSL, call stack modules like TrexCalculationEngine::Optimizer::optimizeHierarchyJoinOverMultiprovider and TrexCalculationEngine::CombineNonRootAggrOverMpRule::applyAggrOverHierachyJoin), you face a SAP HANA bug that is fixed with SAP HANA Rev. 97.02 and 102. |
Pool/malloc/libhdbcsapi.so | Column store API (search) and intermediate results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. Additionally you can check for the following specific constellations:
Pool/malloc/libhdbcs.so Pool/malloc/libhdbcscommon.so |
Column store components | These allocators are responsible for parts of the column store. Their memory allocation decreases implicitly if you reduce the amount of table data in the column store (archiving, cleanup, reduction of indexes, ...). If you see particularly high values for these allocators which can't be explained by the tables in the column store, check the following potential reasons:
Pool/malloc/libhdbcsstore.so | Column store persistence objects | This allocator contains administrative column store information (like parts of the row lock information and transaction handling) and may grow in case of many locks or blocked garbage collection. If much memory is allocated by ptime::LinkHash / TrexStore::LockMapEntry*, it can be caused by an infrequent row lock link hashmap garbage collection. As a workaround you can trigger this garbage collection by unloading and reloading tables with a high INSERT / DELETE load. A fix will be provided with an upcoming SPS 11 / SPS 12 Revision. |
Pool/malloc/libhdbcswrapper.so | (Intermediate) results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. | |||||||||
Pool/malloc/libhdbevaluator.so | Intermediate results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. Database requests responsible for a growth of this allocator typically show evaluator specific modules like Evaluator::ThreeCode::run in their call stacks. |
Pool/malloc/libhdbitab.so | Intermediate results | See Pool/itab for more information. In general Pool/malloc/libhdbitab.so should be small. If it is large and growing, a memory leak can be responsible with Rev. <= 97.02 and Rev. 100 to 102.00. | |||||||||
Pool/malloc/libhdbrskernel.so | Row store components | This allocator contains all dynamic row store memory allocations which aren't assigned to more specific allocators. With newer revisions the utilization of this allocator should decrease. If most space is allocated by ptime::Proc_insert_parallel::execute, it can be caused by a memory leak bug which is fixed with SPS 08. With SAP HANA Revisions between 90 and 102.02 this allocator can grow due to a bug that can be bypassed by setting the following parameter as a workaround (SAP Note 2253121):
indexserver.ini -> [row_engine] -> dynamic_parallel_insert_max_workers = '1'
If you experience a large and rising size that can't be explained, open a SAP incident for clarification. |
Pool/malloc/libhdbtableconsistencycheck.so | Table consistency check | This allocator is related to the consistency check procedure CHECK_TABLE_CONSISTENCY (see SAP Note 1977584). You can limit the number of concurrent executions on different tables or run it at times with less concurrent workload in order to reduce the risk of critical memory allocations. | |||||||||
Pool/mdx | MDX query allocations | As of SAP HANA SPS 09 several reasons for a high memory allocation of Pool/mdx are fixed. | |||||||||
Pool/OptimizeCompression/<schema>:_SYS_SPLIT_<table>~<partition> | Compression optimization | Allocators starting with Pool/OptimizeCompression are used during compression optimizations of tables. See SAP Note 2112604 and make sure that compressions are configured in a reasonable way. |
Pool/parallel Pool/parallel/aggregates Pool/parallel/align Pool/parallel/compactcol Pool/parallel/ihm Pool/parallel/pop Pool/parallel/temp_aggregates |
OLAP aggregation results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. If a high memory consumption is caused by the F4 value help mode D in BW environments (7.30 - 7.40), implement the BW correction available via SAP Note 2097025. Consider setting the hint NO_USE_OLAP_PLAN (SAP Note 2142945) for testing purposes in order to check if a switch from OLAP engine to join engine works and results in a reduced memory consumption. If the issue appears with a BW query, check if the problem improves using a different BW query execution mode. See BW on HANA and the Query Execution Mode for more information related to BW query execution modes. If you face a high memory consumption related to DTP activities in BW, you can check SAP Note 2230080 for possible optimizations. |
Pool/PersistenceManager/ContainerFileIDMapping | LOB container mapping | This allocator maps LOB containers to the persistence files. If it is particularly large, the following reasons are possible:
Pool/PersistenceManager/DisasterRecoveryPrimary | Asynchronous system replication buffer | The main contributor to this allocator is usually the asynchronous system replication buffer, so it only has a significant size if asynchronous system replication is used, and its size closely depends on the value of the related parameter:
<service>.ini -> [system_replication] -> logshipping_async_buffer_size = <size_in_byte>
If you have to increase this buffer, you should only do it for the services with a high redo log generation, typically the indexserver (<service>.ini = indexserver.ini). Setting this parameter in global.ini technically also works, but as a consequence the increased space is allocated multiple times (once for each service), and so memory is wasted. See SAP Note 1999880 for more information related to SAP HANA system replication. |
Pool/PersistenceManager/LogRecovery | Log recovery | This allocator is used to buffer up to four log segments in memory during recovery. The configured log segment sizes can be checked with SQL: "HANA_Logs_LogBuffers" (SAP Note 1969700). In case of a 1 GB log segment size you have to expect a memory allocation of 4 GB during recovery. | |||||||||
Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page | Page cache | The page cache stores blocks retrieved from disk similar to a file system cache. This can e.g. speed up the access to hybrid LOBs (SAP Note 1994962). Space is reclaimed automatically by SAP HANA whenever memory is required, so a large size is not critical. You can check the content of the allocator in terms of page types by executing "pageaccess a" with hdbcons (SAP Note 2222218). Due to a problem with Revisions 90 to 92 the page cache size can be significantly increased. In this case the allocator stack trace contains DataAccess::GarbageCollectorJob::run as main consumer. Even after an upgrade to Revision 93 or higher the issue may not be completely fixed, because a cleanup is only guaranteed when a delta merge is executed. See SAP Note 2146989 for more information and perform a forced merge on critical tables. If the above scenarios don't apply and you see unloads or OOMs at a time when this allocator is still large, it is likely that the disk I/O performance is not able to keep up with the data changes. In this case you should check your I/O stack for bottlenecks. See SAP Note 1999930 for more information. |
Pool/PersistenceManager/PersistentSpace(0)/RowStoreLPA/RowStoreSegment | Row store cache (system replication) | This allocator caches row store blocks on the secondary site of a system replication scenario, so that the row store can be created efficiently during failover. Its size is related to the row store size on the primary system. If the secondary site runs into OOM because of this allocator, you have the following options:
Pool/PersistenceManager/PersistentSpace(0)/StaticLPA/Page | liveCache pages | This area contains the page cache related to liveCache (if operated as part of SAP HANA). Up to SPS 08 these pages aren't swappable. Starting with SPS 09 the space is reclaimed automatically by SAP HANA whenever memory is required, so a large size is not critical. |
Pool/PersistenceManager/UnifiedTable container | L2 delta and paged attribute information | This allocator contains persistence information related to the new delta mechanism used as of SPS 09 (L2 delta) and paged attributes (SAP Note 1871386). Up to SPS 08 delta logs were stored in virtual files instead. The actual delta area in column store remains untouched from this allocator. See SAP Note 2057046 and make sure that delta merges are properly configured and executed, so that the delta storage size of the tables remains on acceptable levels. |
Pool/PlanningEngine/Fox | Dictionaries for FOX formula executions | FOX formula executions by the planning engine may require temporary helper structures that are allocated in Pool/PlanningEngine/Fox. They are dropped after the FOX planning function is finished. If a significant memory allocation - often in combination with Pool/itab - is seen, there may be a loop in the FOX script that has to be corrected. | |||||||||
Pool/PlanningEngine/LookupDict | Master data lookup dictionary of planning engine | You can use SQL: "HANA_Heap_PlanningEngine" (OBJECT_TYPE = 'LOOKUP DICTIONARY') available via SAP Note 1969700 in order to check for the main contributors and the creation times. After a restart of SAP HANA this allocator is empty and re-populated on demand. You can use SQL: "HANA_Heap_PlanningEngine_Cleanup" (SAP Note 1969700) in order to drop no longer required runtime objects. Starting with SPS 10 SAP HANA automatically takes care of the cleanup. |
Pool/QueryMediator | Processing of complex filters | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. Related queries typically have query mediator related modules in their call stacks, e.g.:
QueryMediator::FilterProcessor::addFilterAsExpression
QueryMediator::FilterTranslation::SearchOperation
Starting with SAP HANA SPS 10 an optimization is implemented so that the problem is fixed in many cases. As a workaround you can check if specifying the hint NO_CS_ITAB_IN_SUBQUERY helps to reduce the memory consumption. See SAP Note 2142945 for more information related to SAP HANA hints. |
Pool/RowEngine/CpbTree | Row store indexes | Check via SQL: "HANA_RowStore_TotalIndexSize" (SAP Note 1969700) if the size of the heap allocator is in line with the size of the row store indexes. If it is significantly larger, most likely a memory leak exists that can only be cleaned up by restarting SAP HANA. Upgrade to at least revision 83 in order to eliminate known memory leaks. Due to a bug with Rev. <= 85.03 and Rev. 90 to 94 index garbage collection is not necessarily triggered in time and so this allocator can unnecessarily grow. With Rev. 85.04 and Rev. 95 a fix is delivered. See SAP Note 2169283 for more information related to garbage collection. If the allocator size is in line with the index sizes, check if there are large tables with indexes in row store that can be cleaned (e.g. via SAP Note 706478) or moved to column store. Check via SQL: "HANA_Indexes_LargestIndexes" (SAP Note 1969700) if there are large indexes created on row store tables that are not required and can be dropped. |
Pool/RowEngine/GlobalHeap | Global, unspecific row engine data areas | This allocator is an unspecific allocator for row engine memory. As all significant memory allocation should be assigned to dedicated allocators, Pool/RowEngine/GlobalHeap shouldn't allocate too much memory. Reasons for increased sizes are:
Pool/RowEngine/MonitorView | Monitoring view information | This heap allocator contains information of in-memory monitoring views (M_* views). You can display the largest areas within Pool/RowEngine/MonitorView using "mm bl" with hdbcons (SAP Note 2222218) as described further below. In general you have to make sure that less data is collected in the critical monitoring views (e.g. by reducing the trace level). Typical problems are:
global.ini -> [expensive_statement] -> use_in_memory_tracing = false |
Pool/RowEngine/QueryCompilation | Compilation memory | This allocator is required during parsing of database queries. See SAP Note 2124112 for more information related to SAP HANA parsing and check if there are queries with a particular high parse time or unnecessarily complicated queries. | |||||||||
Pool/RowEngine/QueryExecution | Row engine results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. Additionally you can check the following known SAP HANA bugs resulting in increased sizes of this allocator:
Pool/RowEngine/RSTempPage | Temporary row store tables | This allocator holds data related to temporary tables and NO LOGGING in row store. Check why many or large temporary row store tables exist and try to reduce it. Make sure that sessions are closed whenever possible, because this will drop related temporary tables. See SAP Note 2000003 ("What kind of temporary and non-persisted tables can be created with SAP HANA?") for more information related to temporary and NO LOGGING tables. |
Pool/RowEngine/Session | Session management | Check if there is an unusually high number of open connections and eliminate the root cause. |
Pool/RowEngine/SQLPlan | SQL cache | The SQL cache can be configured unnecessarily large because underlying issues like a lack of bind variables or varying IN LIST sizes are not recognized. See SAP Note 2124112 and make sure that the SQL cache is not configured larger than required. | |||||||||
Pool/RowEngine/Version | Row store version space | A high number of versions may need to be preserved for read consistency (MVCC) reasons in case of a long running transaction. This increases the size of this allocator. See "Which options exist to reduce the risk of SAP HANA memory issues?" -> "Transactional problems" in this SAP Note for more detailed recommendations. | |||||||||
Pool/SQLScript/Execution | SQL Script runtime information | Check for design problems in the used SQL Script procedures. If you face a high memory consumption with Rev. 100 or 101, a bug can be responsible. Upgrade to Rev. 102 or higher in order to fix it. |
Pool/Statistics | Internal statistical information | You can display the largest areas within Pool/Statistics using "mm bl" with hdbcons (SAP Note 2222218) as described further below. If most space is allocated by MemoryManager::PoolAllocator::PoolAllocator you can check for the most frequent heap allocators using SQL: "HANA_Memory_TopConsumers" (AREA = 'HEAP', ORDER_BY = 'NUM') of SAP Note 1969700. A significant amount of Pool/Statistics memory is required for memory tracking and context memory. Systems with a high number of database connections (e.g. many SAP application servers and work processes) can have up to 100 GB of memory allocation for Pool/Statistics. The following top components are an example for a system with activated memory tracking, around 2000 SQL connections and 3.6 million corresponding records in M_CONTEXT_MEMORY:
44 GB: MemoryManager::PoolAllocator::PoolAllocator (libhdbbasis.so)
14 GB: MemoryManager::MemoryCounter::MemoryCounter (libhdbbasis.so)
12 GB: Execution::ContextAllocator::initImplicitStatementMemoryBooking (libhdbbasis.so)
12 GB: ltt::allocator_statistics::setCompositeLimit (libhdbbasis.so)
The following individual reasons for a high Pool/Statistics allocation exist:
Pool/StatisticsServer/ThreadManager/Stats::Thread_<num> Pool/StatisticsServer/JobManager/Stats::Thread_<num> Pool/StatisticsServer/JobManager/WriteLastValuesJob Pool/StatisticsServer/LastValuesHolder |
Standalone statistics server | These allocators can become quite large if the standalone statistics server is used and a significant amount of monitoring data is available (e.g. large SQL plan cache, many connections). In order to optimize these allocators please proceed as described at "Which options exist to reduce the risk of SAP HANA memory issues?" -> "Statisticsserver optimizations" above. | |||||||||
Pool/StringContainer | Storage of (uncompressed) strings during column store activities | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. A temporary increase of Pool/StringContainer is possible during processing of large amounts of data, e.g.:
Pool/TableConsistencyCheck | Table consistency check | This allocator is related to the consistency check procedure CHECK_TABLE_CONSISTENCY (see SAP Note 1977584). You can limit the number of concurrent executions on different tables or run it at times with less concurrent workload in order to reduce the risk of critical memory allocations. | |||||||||
Pool/TransientMetadataAlloc | Transient metadata | This allocator stores temporary metadata information (object definitions; local on transaction / session level or global). The lifetime of some data is linked to the SQL cache, so you should check if this cache is defined larger than required (see SAP Note 2124112). |
Pool/UdivListMgr/UdivListContainer | MVCC management | This allocator is responsible for managing multi-version concurrency control (MVCC), i.e. the visibility of rows in different transactions. In order to check for problems like long-running transactions you can proceed as described in "Which options exist to reduce the risk of SAP HANA memory issues?" -> "Transactional problems". |
Pool/ValueArray Pool/ValueArrayColumnDeserialize |
Join engine results | See question "Which general optimizations exist for reducing the SQL statement memory requirements?" below in order to make sure that SQL statements are executed as memory-efficient as possible. These allocators are closely linked to Pool/JoinEvaluator/JERequestedAttributes/Results:
|
You can use the tool hdbcons on operating system level in order to understand better how a heap allocator is filled (SAP Note 2222218). Typical commands are:
Command | Example | Purpose |
help mm | Overview of all memory management (mm) related command options | |
mm bl -t <allocator> | mm bl -t Pool/Statistics | Show top memory contributors ("block list") in <allocator> sorted by used size descending |
mm cg -o <file>.dot <allocator> | mm cg -o callgraph.dot Pool/Statistics | Generate output file with allocator stack trace information for <allocator> |
mm f <allocator> as | mm f Pool/Statistics as | Activation of the allocator call stack trace for <allocator>. Particularly useful in case of suspected memory leaks, so that you can understand from which modules the memory allocations are mainly performed. Can result in significant overhead and should only be activated for limited times. |
mm f <allocator> as -d | mm f Pool/Statistics as -d | Deactivation of allocator call stack trace for <allocator> |
mm ru | mm ru | Reset all previous measurements ("reset usage") |
mm top -l <num> <allocator> | mm top -l 20 Pool/Statistics | Generate report with top <num> call stacks recorded for <allocator> |
pageaccess a | Provide breakdown of Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page content based on page type, e.g.:
ConvIdxPage 256k Temp : 1 (262144 Byte)
ConvLeafPage 256k Temp : 130 (34078720 Byte)
TidCidMappingPage 256k Short : 1 (262144 Byte)
FileIDMappingPage 256k Temp : 172 (45088768 Byte)
FileIDMappingPage 256k Short : 2 (524288 Byte)
ContainerDirectoryPage 256k Short : 35 (9175040 Byte)
ContainerDirectoryPage 256k Long : 2 (524288 Byte)
ContainerNameDirectoryPage 256k Long : 1 (262144 Byte)
UndoFilePage 64k Short : 707 (46333952 Byte)
VirtualFilePage 4k InternalShort : 134 (548864 Byte)
VirtualFilePage 16k InternalShort : 57 (933888 Byte)
VirtualFilePage 64k InternalShort : 325 (21299200 Byte)
VirtualFilePage 256k InternalShort : 196 (51380224 Byte)
VirtualFilePage 1M InternalShort : 552 (578813952 Byte)
VirtualFilePage 4M InternalShort : 2832 (11878268928 Byte)
VirtualFilePage 16M InternalShort : 9458 (158678908928 Byte)
VarSizeEntryBasePage 256k Short : 809 (212074496 Byte)
... |
Example 1 (show top contributors of Pool/RowEngine/MonitorView)
mm bl -t Pool/RowEngine/MonitorView
In this example the output showed that more than 100 GB of the allocator Pool/RowEngine/MonitorView was consumed by the ExpensiveStatementsMonitor, so optimizations like adjusting the expensive statements trace or implementing a bugfix to resolve a memory leak problem can be considered.
Example 2 (create an allocator call stack trace and extract top 5 call stacks)
mm ru
mm f Pool/Statistics as
-- Now wait until a representative amount of allocations is captured
mm top -l 5 Pool/Statistics
mm ru
mm f Pool/Statistics as -d

15. How often are OOM dumps written?
In case of OOM situations SAP HANA may write a dump, e.g.:
- <service>_<host>.<port>.rtedump.<timestamp>.oom.trc
- <service>_<host>.<port>.rtedump.<timestamp>.after_oom_cleanup.trc
- <service>_<host>.<port>.rtedump.<timestamp>.compositelimit_oom.trc
- <service>_<host>.<port>.rtedump.<timestamp>.oom_memory_release.trc
Not every OOM termination results in an OOM dump, because in case of a memory bottleneck many different transactions can run into an OOM error within a short time frame. By default a SAP HANA service only creates an OOM dump if the last dump was written at least one day ago. This behaviour can be a disadvantage when two individual OOM situations that happened within less than 24 hours need to be analyzed.
In special cases you can reduce the minimum time between two OOM dumps using the following SAP HANA parameter:
global.ini -> [memorymanager] -> oom_dump_time_delta = <min_seconds_between_oom_dumps>
If you set the parameter to 7200, for example, the minimum interval between two OOM dumps is two hours (7200 seconds).
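The throttling behavior described above can be sketched as follows. This is an illustrative model only, not SAP HANA's actual implementation; the function name and the timestamp handling are assumptions for the sake of the example.

```python
# Illustrative sketch (not SAP HANA's actual code): a service writes a new
# OOM dump only if at least oom_dump_time_delta seconds have passed since
# the previous dump. Default is one day (86400 seconds).

DEFAULT_OOM_DUMP_TIME_DELTA = 86400

def should_write_oom_dump(now, last_dump_time, time_delta=DEFAULT_OOM_DUMP_TIME_DELTA):
    """Return True if a new OOM dump would be written at time 'now'."""
    if last_dump_time is None:  # no dump written yet since startup
        return True
    return now - last_dump_time >= time_delta

# A second OOM two hours after the first is suppressed with the default,
# but written after lowering the parameter to 7200:
suppressed = should_write_oom_dump(now=7200, last_dump_time=0)                    # False
written = should_write_oom_dump(now=7200, last_dump_time=0, time_delta=7200)      # True
```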
16. Where can I find more information regarding SAP HANA memory consumption?
The document SAP HANA Memory Usage Explained provides a good overview of the different types of memory in SAP HANA environments.
17. How can the resident memory be smaller than the allocated memory?
Normally the allocated memory should be fully contained in the resident memory. Nevertheless, there are a few exceptions:
- If parts of the virtual memory are paged out to disk, the resident memory can be smaller than the allocated memory.
- There are technical constellations where parts of the heap memory and the row store shared memory are marked as used, but not as resident.
18. What are typical reasons for significant size differences in memory vs. on disk?
The allocation of tables in memory and on disk may differ significantly for the following reasons:
Reason | Symptom | Details |
No logging tables | Memory > disk | Tables created with the NO LOGGING option are not persisted to disk. See SAP Note 2000003 ("What kind of temporary and non-persisted tables can be created with SAP HANA?") for more information. |
Temporary tables | Memory > disk | Tables created with the TEMPORARY option are not persisted to disk. See SAP Note 2000003 ("What kind of temporary and non-persisted tables can be created with SAP HANA?") for more information. |
Single column and row store indexes | Memory > disk | Single column indexes and row store indexes aren't persisted to disk. See SAP Note 2160391 ("Are indexes persisted to disk?") for more information. |
Logically moved tables | Memory > disk | If tables are moved logically, their disk footprint can be significantly smaller than the size in memory. See SAP Note 1994408 for more information. |
Hybrid LOBs | Disk > memory | Large hybrid LOB values (SAP Note 1994962) are not loaded into memory, so the disk size of tables is larger than the memory size. |
Partially loaded tables | Disk > memory | If columns of a table are only partially loaded into the memory, the disk size is larger than the current memory size. You can use SQL: "HANA_Tables_LargestTables" (SAP Note 1969700) to check disk size, potential maximum memory size and current memory size. |
Data fragmentation | Disk > memory | A fragmented data area can significantly increase the disk requirements. You can use SQL: "HANA_Disks_Fragmentation" (SAP Note 1969700) to check for the amount of fragmentation on disk side. |
Database snapshots | Disk > memory | Database snapshots can result in significantly increased disk space requirements, because the before image of modified blocks needs to be stored in addition to the normal data blocks. Therefore you should make sure that old snapshots are deleted. SQL: "HANA_IO_Snapshots" (SAP Note 1969700) can be used to check for old snapshots. See SAP Note 2100009 for more information related to savepoints and snapshots. |
Garbage collection blocked | Disk > memory | Blocked persistence garbage collection can result in a significant increase of disk space. SAP Note 2169283 describes how to analyze issues with garbage collection. |
Large DELETE / TRUNCATE | Disk > memory | As described in SAP Note 2014987 the disk size can remain at a high level after having performed a large DELETE or TRUNCATE operation. The amount of allocated disk space can be 16 MB * <num_columns> * <num_partitions> in the worst case. Proceed as described in SAP Note 2014987 in order to reduce the allocated disk size. |
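The worst-case formula from the last row of the table above can be evaluated with a few lines of arithmetic. This is a back-of-the-envelope sketch; the function name is introduced here for illustration only.

```python
# Worst-case disk space that can remain allocated after a large DELETE or
# TRUNCATE (see SAP Note 2014987): 16 MB * <num_columns> * <num_partitions>.

MB = 1024 ** 2

def worst_case_disk_bytes(num_columns, num_partitions):
    """Upper bound of disk space still allocated after DELETE / TRUNCATE."""
    return 16 * MB * num_columns * num_partitions

# Example: a table with 100 columns and 4 partitions can keep up to
# 6.25 GB allocated on disk even though it is empty in memory.
gb = worst_case_disk_bytes(100, 4) / 1024 ** 3   # -> 6.25
```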
19. Which general optimizations exist for reducing the SQL statement memory requirements?
The following general rules can help to reduce the memory requirements of SQL statements during execution:
Rule | Details |
As few rows as possible | Use as many restrictions as possible so that the amount of fetched records is as small as possible. |
As few columns as possible | Select as few columns as possible. Avoid "SELECT *" whenever possible. |
Avoid UNION ALL, UNION, INTERSECT, EXCEPT | These operations can't be handled by the column engine, so optimizations like late materialization (SAP Note 1975448) are not possible. As a consequence the memory requirements can increase significantly. Therefore you should use alternatives whenever possible (e.g. OR instead of UNION or UNION ALL). |
BW: Configure safety belt | If BW queries read a large amount of data, check if it is possible to configure the query safety belt as described in SAP Note 1127156. |
20. How can the tables with the highest memory consumption be determined?
You can use SQL: "HANA_Tables_LargestTables" (SAP Note 1969700) in order to check for the largest tables in memory. The following ORDER_BY settings are possible:
ORDER_BY | Details |
MAX_MEM | The tables (including indexes and LOBs) with the highest possible maximum memory consumption are shown. The maximum memory information is independent of the currently loaded columns and so it provides a general overview independent of the current load state. |
CURRENT_MEM | The tables with the highest current memory consumption (including indexes and LOBs) are displayed. |
TABLE_MEM | The tables with the highest current memory consumption (excluding indexes and LOBs) are displayed. |
INDEX_MEM | The tables with the highest index memory consumption are displayed. |
21. How much swap space should be configured for SAP HANA hosts?
It is recommended to configure only a small swap space in order to avoid performance regressions at times of high memory utilization on operating system side; it is usually better if individual activities are terminated with "out of memory" errors instead. This makes sure that the overall system remains usable and only certain requests are terminated. A good value for the swap space is 2 GB (see e.g. SAP Note 1944799 for SLES environments).
22. How can I trigger a memory garbage collection?
Memory garbage collection is done in order to release memory that is no longer used. It is not required to perform this task manually, as SAP HANA automatically takes care of this activity whenever required. In exceptional cases you can trigger memory garbage collection manually with hdbcons (SAP Note 2222218) using the following command:
hdbcons 'mm gc -f'
See SAP Note 2169283 for more information related to SAP HANA garbage collection.
23. Why do I get an OOM although the SAP HANA allocation limits aren't reached?
The following reasons can be responsible for OOM situations although neither the global nor the process-specific allocation limits are reached:
Reason | Details |
Operating system memory exhausted | Check if the available memory is exhausted on operating system side, e.g. because of external software allocating a lot of memory, large caches or another SAP HANA instance. Make sure that in the future there is always enough physical memory available to host the complete SAP HANA allocation limit. |
Small temporary process allocation limit | Based on the defined SAP HANA allocation limits and the current service memory allocations, the temporary process allocation limit (TPAL) may be significantly smaller than the defined allocation limit. As a consequence OOMs are possible although the configured allocation limits aren't reached. SAP Note 2133638 describes a related startup issue that can happen as of Rev. 90. |
Statement memory limit reached | OOM dumps with "compositelimit" in their names do not indicate a global memory shortage. Instead they are linked to a defined statement memory limit. See "Is it possible to limit the memory that can be allocated by a single SQL statement?" above for more details. |
24. How can I involve SAP to perform a detailed memory check?
A detailed SAP HANA memory check and further general health checks and performance optimizations are performed as part of the SAP HANA Technical Performance Optimization Service (TPO). See SAP Note 2177604 for more information.
25. Why is the allocated memory in some heap allocators very large?
The column EXCLUSIVE_ALLOCATED_SIZE in the monitoring view M_HEAP_MEMORY (respectively HOST_HEAP_ALLOCATORS) contains the sum of all allocations in a heap allocator since the last startup. Normally a lot of deallocations also happen, so the EXCLUSIVE_ALLOCATED_SIZE can be much higher than the currently allocated size. For example, if over time 100 MB are allocated and deallocated 10 times, the actual allocated size is 0, but EXCLUSIVE_ALLOCATED_SIZE would show 1 GB (10 * 100 MB).
If the overall allocated memory is much higher than the overall used memory, the difference is usually free for reuse and no longer specific to a heap allocator. Therefore the EXCLUSIVE_ALLOCATED_SIZE information can only be used to understand which heap allocators have the highest "throughput" in terms of memory allocations; it is not helpful for understanding the current memory situation.
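The difference between the cumulative and the current counter can be made concrete with a small sketch. The class below is hypothetical and only models the accounting described above; it is not a SAP HANA API.

```python
# Minimal model of the accounting described above: EXCLUSIVE_ALLOCATED_SIZE
# sums all allocations since startup and only grows, while the current size
# also reflects deallocations. (Hypothetical class, for illustration only.)

class HeapAllocatorStats:
    def __init__(self):
        self.exclusive_allocated = 0  # cumulative allocations since startup
        self.current = 0              # allocations minus deallocations

    def allocate(self, nbytes):
        self.exclusive_allocated += nbytes
        self.current += nbytes

    def deallocate(self, nbytes):
        self.current -= nbytes

# 100 MB allocated and freed 10 times: the current size ends at 0, but the
# cumulative counter shows ~1 GB (10 * 100 MB) of allocation "throughput".
stats = HeapAllocatorStats()
for _ in range(10):
    stats.allocate(100 * 1024 ** 2)
    stats.deallocate(100 * 1024 ** 2)
```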
26. Why does the delta storage allocate more memory with SAP HANA SPS >= 09?
With SAP HANA SPS 09 the delta storage was significantly adjusted. As a consequence the minimum memory footprint of the delta storage of a loaded empty column increased from around 2 KB to more than 8 KB. If there are many empty tables with many columns, this can increase the overall delta storage size by 10 GB or more. This is expected behavior that can't be changed.
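The order of magnitude of this growth can be estimated from the per-column figures above. The function and the example numbers (50,000 empty tables with 40 columns each) are assumptions chosen for illustration.

```python
# Rough estimate of the delta storage growth described above: the minimum
# footprint per loaded empty column rose from ~2 KB to ~8 KB with SPS 09,
# i.e. roughly 6 KB of additional memory per empty loaded column.

KB = 1024

def delta_growth_bytes(num_empty_tables, avg_columns_per_table,
                       old_per_column=2 * KB, new_per_column=8 * KB):
    """Estimated additional delta storage memory after the SPS 09 change."""
    return num_empty_tables * avg_columns_per_table * (new_per_column - old_per_column)

# Example: 50,000 empty loaded tables with 40 columns each add more than
# 10 GB of delta storage, consistent with the "10 GB and more" above.
extra_gb = delta_growth_bytes(50_000, 40) / 1024 ** 3
```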
Header Data
Released On | 01.02.2016 16:48:46 |
Release Status | Released to Customer |
Component | HAN-DB SAP HANA Database |
Priority | Normal |
Category | How To |