Wednesday, May 30, 2012

Embedding Image in mail body instead of sending it as an attachment

Get the path of the image residing in the MIME repository, fetch the binary content of the image, convert it into Base64 data, and attach the Base64 data to the HTML mail body to embed the image.
·         Create a Z report program.
·         Get the image path from the MIME repository (SE80): '/SAP/PUBLIC/image.jpg' (path of the image in the MIME repository that needs to be embedded)
·         Get the binary content of the image.
  DATA: o_mr_api  TYPE REF TO if_mr_api,
        is_folder TYPE boole_d,
        l_current TYPE xstring,
        l_loio    TYPE skwf_io.

  IF o_mr_api IS INITIAL.
    o_mr_api = cl_mime_repository_api=>get_api( ).
  ENDIF.
      CALL METHOD o_mr_api->get
    EXPORTING
      i_url              = '/SAP/PUBLIC/image.jpg'
    IMPORTING
      e_is_folder        = is_folder
      e_content          = l_current
      e_loio             = l_loio
    EXCEPTIONS
      parameter_missing = 1
      error_occured      = 2
      not_found          = 3
      permission_failure = 4
      OTHERS             = 5.

* l_current now holds the image as an XSTRING
·         Convert the binary image data into Base64.
  DATA b64data TYPE string.

  CALL FUNCTION 'SSFC_BASE64_ENCODE'
    EXPORTING
      bindata = l_current
    IMPORTING
      b64data = b64data
    EXCEPTIONS
      OTHERS  = 1.
  IF sy-subrc <> 0.
*   Handle encoding errors here
  ENDIF.
·         Create the mail body using HTML.
     In the email body, the image is displayed inline using its Base64-encoded content.
    DATA: gt_mail_body TYPE soli_tab,
          wa_mail_body TYPE soli,
          lv_length    TYPE i,
          lv_len2      TYPE i.

  clear wa_mail_body.
  move '<html>' to wa_mail_body.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  move '<head>' to wa_mail_body.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  move '<title>hello</title>' to wa_mail_body.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  move '<meta http-equiv="content-type" content="text/html;charset=iso-8859-1">' to wa_mail_body.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  move '</head>' to wa_mail_body.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  move '<body>' to wa_mail_body.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  wa_mail_body  = '<em><font'  .
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  wa_mail_body  = 'color="#0000ff" size="+7" face="arial,'.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  wa_mail_body  = 'helvetica, sans-serif">test image</font></em>'.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
*add image base64 content
  wa_mail_body = '<img src="data:image/jpeg;base64,'.

  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.

  lv_length = strlen( b64data ).
  lv_len2 = lv_length / 255.

* wa_mail_body is CHAR 255, so this takes the first 255 characters
  wa_mail_body = b64data.
  APPEND wa_mail_body TO gt_mail_body.
  CLEAR wa_mail_body.


  DATA lv_len3 TYPE i.
  DO lv_len2 TIMES.
    lv_len3 = 255 * sy-index.

    IF lv_len3 <= lv_length.
*     Append the next chunk of up to 255 characters
      wa_mail_body = b64data+lv_len3.
      IF wa_mail_body IS NOT INITIAL.
        APPEND wa_mail_body TO gt_mail_body.
        CLEAR wa_mail_body.
      ELSE.
        EXIT.
      ENDIF.
    ELSE.
      EXIT.
    ENDIF.
  ENDDO.

  wa_mail_body = '" alt="happy birthday" align="middle" width="304" height="228" />'.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  move '</body>' to wa_mail_body.
  append wa_mail_body to gt_mail_body.

  clear wa_mail_body.
  move '</html>' to wa_mail_body.
  append wa_mail_body to gt_mail_body.
·         Send Mail.
  DATA: l_subject      TYPE so_obj_des,
        lr_email       TYPE REF TO cl_bcs,
        lr_email_body  TYPE REF TO cl_document_bcs,
        lr_receiver    TYPE REF TO if_recipient_bcs,
        l_sender       TYPE REF TO cl_sapuser_bcs,
        l_mail_address TYPE ad_smtpadr,
        l_send_result  TYPE os_boolean.

  l_subject = 'Test : Image in mail'.

  lr_email_body = cl_document_bcs=>create_document(
                                   i_type = 'HTM'
                                   i_text = gt_mail_body
                                   i_subject = l_subject ).

  lr_email = cl_bcs=>create_persistent( ).
  lr_email->set_document( lr_email_body ).

  l_mail_address = 'test@test.com'.

  lr_receiver = cl_cam_address_bcs=>create_internet_address( l_mail_address ).
  lr_email->add_recipient( i_recipient = lr_receiver ).

  "Set Sender and send mail
  l_sender = cl_sapuser_bcs=>create( sy-uname ).
  lr_email->set_sender( l_sender ).
  lr_email->set_send_immediately( 'X' ).  "Send email directly
  l_send_result = lr_email->send( i_with_error_screen = 'X' ).

  COMMIT WORK.
Output:
Go to the SAP inbox (transaction SBWP) and check the mail for the embedded image.

Saturday, May 12, 2012

Distributing material master IDocs using change pointers

To create material master IDocs without using change pointers, execute transaction BD10.
The steps to distribute material master IDocs to other systems using change pointers are:
• Create logical system for the receiver system: BD54
• Create distribution model: BD64
• Activate change pointers for message type MATMAS: Transaction BD50
• Add MATMAS message type to the Outbound parameters for the partner profile for the receiver system: WE20
• After a material has been changed - create IDocs from change pointers: BD21
Program RBDMIDOC, which generates IDocs from change pointers, can be scheduled to run automatically.
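As a sketch, IDoc creation from change pointers can also be triggered from a custom program by submitting RBDMIDOC (assuming its selection-screen parameter for the message type is named MESTYP, as in standard systems):

```abap
* Trigger IDoc creation from change pointers for message type MATMAS.
* Assumption: RBDMIDOC's selection parameter is named MESTYP.
SUBMIT rbdmidoc
  WITH mestyp = 'MATMAS'
  AND RETURN.
```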

SAP FI - Parked Documents Transactions and Tables

Transaction codes

FBV0 : Post parked document
FBV2 : Change
FBV3 : Display
FBV4 : Change Header
FBV5 : Display Changes
FBV6 : Refuse
FV50 : Post / Delete : Single Screen Transaction

Tables

Parked document header data goes to the BKPF table, but item-level data does NOT go to the BSEG table. Instead the tables below are used:
VBSEGA - Assets parked document detail
VBSEGD - Customers parked document detail
VBSEGK - Suppliers parked document detail
VBSEGS - General Ledger  parked document detail
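As a minimal sketch (assuming parked documents are marked with BSTAT = 'V' in BKPF, and the standard VBSEGS key fields AUSBK/BELNR/GJAHR), the parked G/L items could be read like this:

```abap
* Read parked document headers and their G/L line items.
DATA: lt_bkpf   TYPE STANDARD TABLE OF bkpf,
      lt_vbsegs TYPE STANDARD TABLE OF vbsegs.

SELECT * FROM bkpf
  INTO TABLE lt_bkpf
  WHERE bukrs = '1000'      "example company code
    AND gjahr = '2012'
    AND bstat = 'V'.        "assumption: 'V' = parked

IF lt_bkpf IS NOT INITIAL.
  SELECT * FROM vbsegs
    INTO TABLE lt_vbsegs
    FOR ALL ENTRIES IN lt_bkpf
    WHERE ausbk = lt_bkpf-bukrs
      AND belnr = lt_bkpf-belnr
      AND gjahr = lt_bkpf-gjahr.
ENDIF.
```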

Thursday, May 3, 2012

ABAP Performance Tuning Tips

Tools that can be used to help with performance tuning.
1. ST05 is the Performance Trace. It contains the SQL trace plus RFC, enqueue and buffer traces. Mainly, the SQL trace is used to measure the performance of the SELECT statements of the program.
2. SE30 is the Runtime Analysis transaction and can be used to measure the application performance.
3. The SAT transaction is the replacement for SE30. It provides the same functionality as SE30 plus some additional features.
4. ST12 transaction (part of ST-A/PI software component) is a combination of ST05 and SAT. Very powerful performance analysis tool used primarily by SAP Support.

5. One of the best tools for static performance analyzing is Code Inspector (SCI). There are many options for finding common mistakes and possible performance bottlenecks.
Steps to optimize the ABAP Code
1. Database
a. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved. 
b. Design the query to use as many index fields as possible, from left to right, in the WHERE clause
c. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot
d. Avoid using nested SELECT statement and SELECT within LOOPs, better use JOINs or FOR ALL ENTRIES. Use FOR ALL ENTRIES when the internal table is already there. Try JOINs if the SELECT statements are right behind each other.
e. Avoid using ORDER BY in SELECT statements if the sort order differs from the index used (instead, sort the resulting internal table), because sorting adds extra work to the database system, which is a single shared resource, while there may be many ABAP application servers
f. INDEX: Creating an index to improve performance should not be done without thought. An index speeds up reads,
but at the same time adds two overheads, namely memory and insert/update performance. When an index is created, memory is used up for storing it, and index sizes can be quite big on large transaction tables! When a new entry is inserted into the table, all the indexes are updated: the more indexes, and the more data, the longer updating all the indexes takes
g. Avoid Executing an identical Select (same SELECT, same parameter) multiple times in the program
h. Avoid using join statements if adequate standard views exist.
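Points (a) and (b) above can be sketched with the flight demo table SFLIGHT, restricting on the leading key fields:

```abap
* Restrict the selection with the leading index/key fields
* (SFLIGHT key: CARRID, CONNID, FLDATE)
DATA lt_sflight TYPE STANDARD TABLE OF sflight.

SELECT * FROM sflight
  INTO TABLE lt_sflight
  WHERE carrid = 'LH'        "first key field
    AND connid = '0400'.     "second key field
```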

2. Table Buffer
a. Defining a table as buffered (SE11) can help improve performance, but this has to be used with caution. Buffering means data is read from the buffer rather than from the table; the buffer is synchronized with the table only periodically, when something changes. If the table is a transaction table, chances are the data changes for a particular selection, so application tables are usually not suited for buffering. Use table buffering for configuration data and sometimes for master data.
b. Avoid complex SELECTs on buffered tables, because SAP may not be able to interpret the request and may pass it to the database anyway. The Code Inspector reports which statements bypass the buffer.


3. Internal Table
a. Use sorted tables when nested loops are required.
b. Use assign (field symbol) instead of into in LOOPs for table types with large work areas
c. Use READ TABLE ... BINARY SEARCH with large standard tables to speed up the search. Be sure to sort the internal table before the binary search.
d. Use transaction SE30 (Runtime Analysis) to check the code
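Point (a) can be sketched with a sorted table, where the inner LOOP ... WHERE on the leading key fields uses the sort order instead of scanning the whole table:

```abap
* Sorted inner table: LOOP ... WHERE vbeln = ... is a partial-key access
DATA: lt_vbak TYPE STANDARD TABLE OF vbak,
      wa_vbak TYPE vbak,
      lt_vbap TYPE SORTED TABLE OF vbap
              WITH NON-UNIQUE KEY vbeln posnr,
      wa_vbap TYPE vbap.

LOOP AT lt_vbak INTO wa_vbak.
  LOOP AT lt_vbap INTO wa_vbap WHERE vbeln = wa_vbak-vbeln.
*   process the matching items here
  ENDLOOP.
ENDLOOP.
```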

4. Miscellaneous
a. PERFORM: When writing a subroutine, always provide type for all the parameters. This reduces the overhead which is present when the system determines on its own each type from the formal parameters that are passed.
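The point above can be sketched as a FORM with fully typed parameters (names are illustrative):

```abap
* Typing every parameter spares the runtime from deriving
* the type of each formal parameter at the call
FORM add_values
  USING    iv_a   TYPE i
           iv_b   TYPE i
  CHANGING cv_sum TYPE i.
  cv_sum = iv_a + iv_b.
ENDFORM.

* Call:
* PERFORM add_values USING 2 3 CHANGING lv_sum.
```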

Which is the better - JOINS or SELECT... FOR ALL ENTRIES?
The effect of FOR ALL ENTRIES needs to be observed first by running a test program and analyzing the SQL trace. Certain options set by BASIS can cause FOR ALL ENTRIES to execute as an 'OR' condition. This means that if the driver table has 3 records, the SQL trace will show 3 SQL statements being executed; in such a case FOR ALL ENTRIES brings no benefit. However, if the SQL trace shows 1 SQL statement, FOR ALL ENTRIES is beneficial, since it is actually being executed as an IN list.
JOINs are recommended for up to about 5 tables. If the JOIN is made on fields which are key fields in both tables, it reduces program overhead and increases performance. So, if the JOIN is between two tables where the joining keys are key fields, a JOIN is recommended over FOR ALL ENTRIES.
You can use for all entries to reduce the database hits, and use non-key fields.
Here is a code with join :
SELECT a~vbeln a~kunnr a~kunag b~name1
  INTO TABLE i_likp
  FROM likp AS a
  INNER JOIN kna1 AS b
    ON a~kunnr = b~kunnr.

* Alternative: FOR ALL ENTRIES with a reduced driver table.
* Minimize entries in i_likp2 by deleting duplicate kunnr.

LOOP AT i_likp INTO w_likp.
  w_likp2-kunnr = w_likp-kunnr.
  APPEND w_likp2 TO i_likp2.
ENDLOOP.

SORT i_likp2 BY kunnr.
DELETE ADJACENT DUPLICATES FROM i_likp2 COMPARING kunnr.

* Get data from KNA1 - guard against an empty driver table,
* which would otherwise select ALL rows
IF i_likp2[] IS NOT INITIAL.
  SELECT kunnr name1
    INTO TABLE i_kna1
    FROM kna1
    FOR ALL ENTRIES IN i_likp2
    WHERE kunnr = i_likp2-kunnr.
ENDIF.


Avoid use of nested loops
When a nested loop has to be used, use a condition for the inner loop. Otherwise, in the production environment, the loop may take a very long time and the program may dump.
LOOP AT itab1.
  LOOP AT itab2 WHERE f1 = itab1-f1.
*   ...
  ENDLOOP.
ENDLOOP.

Another option is to use READ with BINARY SEARCH for the second table.
SORT itab2 BY f1.

LOOP AT itab1.
  READ TABLE itab2 WITH KEY f1 = itab1-f1 BINARY SEARCH.
  IF sy-subrc = 0.
    idx = sy-tabix.
    LOOP AT itab2 FROM idx.
      IF itab2-f1 <> itab1-f1.
        EXIT.
      ENDIF.
*     ...
    ENDLOOP.
  ENDIF.
ENDLOOP.

SAP ABAP Fine Tuning Developments


Factors influencing Performance Tuning
§          Database Operation
§          Application Code
§          Memory Handling
§          Tools for Tuning
-          SQL Trace
-          Runtime Analysis
Database Operation
§          Database Access
§          Data volume and Network Load
§          Bulk retrieval or update
§          Frequency of Communication between Application Program & Database System (Array operation over Single Row)
§          Usage of Index
§          Table buffering
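The "array operation over single row" point can be sketched as a mass INSERT (zmytab is a hypothetical custom table used for illustration):

```abap
* One array INSERT instead of many single-row INSERTs
DATA lt_rows TYPE STANDARD TABLE OF zmytab.  "zmytab: hypothetical table

* ... fill lt_rows ...
INSERT zmytab FROM TABLE lt_rows.
```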
Design Stage Considerations
§          Restrict Data Selection
§          Make more fields mandatory
§          If the initial data selection itself is huge :
-          warn user about performance before proceeding further
-          Terminate further processing with suggestion to run with more restriction
-          Suggest Background operation
§          Designing the selection screen :
-          Correctly group the fields which are used for filtering the data during selection.
-          Group the fields which would be used only for restricting the output.
The user then knows which fields can improve response time and which cannot.
§          If the report is used for on-line purposes only, a drill-down report would be a better alternative.
§          Get more information only when you need it!
§          Check current data size and the expected growth rate.
Effective Select Statement
§          Specify by Key fields while selection
§          Avoid select within loops / select.. Endselect
§          Inner join / views better than nested selects / multiple selects
§          Take care with 'FOR ALL ENTRIES IN table'
-          Check first whether the driver table is initial ( an empty table selects ALL rows )
-          Beware of a large number of records in the driver table
§          Keep Select Clause simple
§          Into Corresponding Fields Vs Into Table
Data Selection from database
§          SELECT ... INTO TABLE provides faster access to the database than appending rows inside a SELECT ... ENDSELECT loop
§          The INTO TABLE statement fetches records in groups, which reduces network traffic, whereas APPEND inside the loop processes individual records
§          For data that is processed only once, the SELECT ... ENDSELECT loop can still be effective, since the INTO TABLE statement collects all the data of a table in memory
§          Use the PACKAGE SIZE option: data is put into the table in packets of n rows
§          Use a WHERE clause supported by an index rather than selecting everything and filtering afterwards
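The PACKAGE SIZE option can be sketched as follows; each pass of the loop receives at most n rows:

```abap
* Process MARA in packets of 1000 rows instead of all at once
DATA lt_mara TYPE STANDARD TABLE OF mara.

SELECT * FROM mara
  INTO TABLE lt_mara
  PACKAGE SIZE 1000
  WHERE mtart = 'FERT'.
* process the current packet of up to 1000 rows here
ENDSELECT.
```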
Aggregate Clauses
§          Use ORDER BY clauses only if an index supports the sort order
§          Do not use ORDER BY if the data selected is large ( more than 20% of the total )
§          Statements like COUNT, SUM, MIN, MAX, AVG are executed at the database level
§          Using these aggregate functions avoids transferring large amounts of data to the application just to calculate an aggregate
§          These aggregate functions also combine well with the GROUP BY statement
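As a sketch with the flight demo table SFLIGHT, the aggregation runs on the database and only one row per carrier is transferred:

```abap
* Aggregate at the database level with GROUP BY
DATA: lv_carrid TYPE sflight-carrid,
      lv_seats  TYPE i.

SELECT carrid SUM( seatsocc )
  FROM sflight
  INTO (lv_carrid, lv_seats)
  GROUP BY carrid.
  WRITE: / lv_carrid, lv_seats.
ENDSELECT.
```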
INDEXES   
•          An index can be considered a sorted  copy of a database table of certain fields.
•           Sorting provides faster access to the data records of the table, ( binary search ).
•           Indexes are of two types:- Primary and Secondary
•          The primary index contains the key fields of the table and a pointer to the non-key fields
•           The primary index is created automatically when the table is created in the database.
•           Other indexes created are called secondary indexes.
•           Create a secondary index when the table is accessed frequently with fields other than the fields of the primary index
Programming using Indexes
§          Data being selected
§          Volume of data
§          Order of fields in Index


What to Keep in Mind for Secondary Indexes                               
§          When table contents change, indexes are readjusted
§          Slows down database insertion
§          Frequently accessed tables should not have too many indexes
§          Create indexes only when that index is likely to be used frequently
§          The indexes created may not be used as one expects
§          The index actually used is determined by an algorithm of the database optimizer
§          Indexes should have few common fields; too many common fields create problems for the optimizer
§          Use SQL Trace for determining indexes
Type of Index Scans
§          Unique Index Scan:- The entry specified by the index is unique
§          Range Scan:- Most frequent; a range of values is scanned
§          Full Table Scan:- No index is used; the full table is scanned for data
Buffering - Important Points to consider
§          The database interface of SAP provides buffers on each application server which allows local storage of database tables
§          Access to data in tables which are buffered can take place from application server instead of accessing the database directly
§          Table buffers enhance the performance of a system by reducing the number of times the database is accessed through the network for data
§          The performance improvement due to table buffers on a systems with several application servers is considerably more compared to a central system with only one application server
§          But even on a central system with one application server, a noticeable performance effect comes from the reduction in process load when a buffer is accessed instead of the database
§          The buffer values remain unchanged when a read access is made. If the application changes the data, the changes are made on the database first and then in the buffer of the corresponding application server
§          When data is changed on the database it is logged in the DDLOG table on the database
§          Buffers on other application servers  are updated at  intervals of one to two minutes
§          This is done with the help of the log maintained in the DDLOG table. When the synchronizing mechanism is run, the log invalidates the buffers on all other application servers
§          This causes all the other application servers to access the database directly for data and update their buffers the next time when data is needed
Operations on Internal Tables
§          Copying of internal tables - copy the whole table in one statement rather than line by line
§          Do not delete records of an internal table inside a LOOP over the same table
§          Delete records using the WHERE clause
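The last two points can be sketched as a single DELETE ... WHERE on the internal table:

```abap
* One mass DELETE with WHERE instead of deleting rows
* one by one inside a LOOP over the same table
DATA lt_flights TYPE STANDARD TABLE OF sflight.

DELETE lt_flights WHERE seatsocc = 0.
```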
Memory Management
§          Avoid unnecessary variables
§          Use local variables in modularization units
§          Transfer key information to the Calls
§          Free Internal table no longer in use
§          Optimize usage of bulk data, memory and processing overhead
SQL Trace (ST05)
§          Overview
§          Understanding What is Measured
§          Creating an SQL Trace Data File
§          Calling an SQL Trace Data File
§          Analyzing an SQL Trace Data File
SQL Trace: Overview
§          The SQL statements that the application uses.
§          Shows what the ABAP processor is requesting from the database
§          Lists actual duration of each database request
§          What database accesses or changes occur in the update section of the application
§          How the system translates ABAP OPEN SQL commands (such as SELECT) into standard SQL commands
§          Gives index usage and available indexes
§          Presents types of table scans used on fields
§          Red flags questionable SQL statements
§          Where the application makes unnecessary database accesses or repeated accesses
From the time the trace function is turned on to the time it is turned off again, all database activity occurring either for a specific user or for the entire system is recorded.
Buffering of Database Requests
-          To keep the number of runtime-consuming PREPARE calls small, each of an application server's work processes holds a certain number of already translated SQL statements in a special buffer. By default, a process holds up to 250 statements.
-          If the system must execute a specific OPEN SQL, the system first checks whether this statement is stored in the "statement cache". If the statement is in the cache, the system executes it immediately using a REOPEN (SELECT) or a REEXEC (INSERT, UPDATE, DELETE).

Buffering of Database Requests (cont'd)
-          If the statement is not buffered, a PREPARE operation prepares it for the subsequent OPEN/EXEC. The system administers the buffer according to the LRU algorithm ("least recently used"): when space is needed for new statements, rarely used statements are deleted. As a result, the system usually needs to prepare frequently used statements only once.
-          An application server buffers the DECLARE, PREPARE, OPEN, and EXEC requests within the cursor cache of one work process. As a result, once the system opens a cursor for a DECLARE operation, it can use this cursor over and over again within the same work process.
Understanding What is Measured
§          Logical Sequence of Database Requests
-          Database requests are interconnected and always occur in the same logical sequence. The DECLARE function defines and numbers the cursor. DECLARE precedes the PREPARE function. Use PREPARE to prepare a specific database statement, such as:
-          select * from sflight where carrid eq 'LH'.
-          and define the access method before the system can transfer the request to the database. During this preparation, the system is concerned only with the structure of the SQL statement and not with the values it contains.
-          The OPEN function takes the prepared SELECT statement and completes it with the correct values. In the above example, OPEN would assign the field carrid the value 'LH'.
Creating an SQL Trace Data File
§          Go to the program to be SQL traced
§          If the program has a selection screen, execute the program and bring it to the selection screen
§          From the menu, choose System > Create Session, and type /nst05 in the transaction entry field of the new session.  Select the Trace on button.
§          Go back to the program session, and execute the program (F8).  Once the program is through executing, return to the SQL Trace session and select Trace off.  Now select List Trace.   A detailed list of all database requests will appear, as shown on the next slide.
Retrieving Trace Data File
1) Call up the initial screen of the SQL Trace tool.
2) Choose List trace.
           The system asks to specify a trace file. The last trace that was run is suggested as the default value.
3) Ensure that the information is correct.
           If a trace is run using an * (asterisk) for the user name, enter * (asterisk) in the DB trace for user to retrieve the trace.
4) Choose OK.
General Tips
§          Visualize table growth and build suitable restrictions at the design stage itself, like limiting document selection by date / status / material etc.
§          SELECT less with right where clause than selecting all and then filtering.
§          Use effective table joins or use correct views.
§          Reduce database hits by selecting all info. in one shot rather than repeated access.
§          Use of secondary sales index tables may reduce time, instead of using the main table with custom indices.
§          Check whether right index is picked
§          Apply all performance related techniques while processing internal tables, especially when they are expected to be big.
§          Perform runtime analysis and SQL trace to identify bottlenecks