Channel: Informix Fun Facts

How to enable automatic compression when loading an empty table


Loading an empty table with compressed data is not trivial. For Informix to build a compression dictionary, the table must already contain some data, but an empty table has none. To solve this problem, we can use the database scheduler to monitor a specific table so that, once it contains enough rows, a compression dictionary is built and the remaining rows inserted into the table are compressed.

To simplify this operation we use a very simple database scheduler task. This task monitors a specific table in the background, waiting for any fragment to reach a specific number of rows, and then builds a compression dictionary on that fragment. When all fragments in the table have a dictionary, or a timeout value is exceeded, the task terminates.

First we create two configuration values for the task in the sysadmin database. We do this by inserting rows into the ph_threshold table in sysadmin.

INSERT INTO ph_threshold
    (id,name,task_name,value,value_type,description)
    VALUES
    (0,"COMPRESSION TABLE TIMEOUT", "compress_table","900", "NUMERIC",
    "The timeout values in seconds for this task."
    );
INSERT INTO ph_threshold
    (id,name,task_name,value,value_type,description)
    VALUES
    (0,"COMPRESSION TABLE ROW COUNT", "compress_table","2000", "NUMERIC",
    "The number of rows in a fragment before a compression dictionary will be created."
     );
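If you later need different limits, these thresholds can be adjusted with a plain UPDATE rather than re-inserting the rows. The statement below is only an illustration; the value of 5000 is arbitrary.

-- Illustration only: raise the per-fragment row count that triggers
-- dictionary creation from 2000 to 5000 rows.
UPDATE sysadmin:ph_threshold
   SET value = "5000"
 WHERE name = "COMPRESSION TABLE ROW COUNT"
   AND task_name = "compress_table";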

Next we create the task to execute.  This task is a little different from a typical task: it will never be scheduled to run and is in fact disabled.  It is only executed when an end user manually invokes it with a form of the exectask() function.  The reason for making this a database scheduler task is that tasks can be executed in the background, asynchronously to the running program.  The insert statement below defines the task, but not the stored procedure, compress_table, that the task executes.

INSERT INTO ph_task
 (
 tk_name,
 tk_type,
 tk_group,
 tk_description,
 tk_execute,
 tk_start_time,
 tk_stop_time,
 tk_frequency,
 tk_delete,
 tk_enable
 )
 VALUES
 (
 "compress_table",
 "TASK",
 "TABLES",
 "Task to be kicked off when loading a table to ensure data is compressed",
 "compress_table",
 NULL,
 NULL,
 INTERVAL ( 1 ) DAY TO DAY,
 INTERVAL ( 30 ) DAY TO DAY,
 'f'
 );

 

THE TASK'S STORED PROCEDURE

The last component is the stored procedure executed by the task.  The procedure takes three arguments: the first two are the standard task arguments (task id and sequence number) and the last is the name of the table on which auto compression will be enabled.

CREATE FUNCTION compress_table(task_id INTEGER, task_seq INTEGER, tabname LVARCHAR )
   RETURNING INTEGER

   DEFINE timeout INTEGER;
   DEFINE fragments_left INTEGER;
   DEFINE row_count INTEGER;
   DEFINE fragid INTEGER;
   DEFINE rc INTEGER;
   DEFINE cnt INTEGER;
   DEFINE created_at DATETIME YEAR TO SECOND;

   -- Get the config thresholds
   SELECT MAX(value::integer) INTO timeout
        FROM sysadmin:ph_threshold
        WHERE name = "COMPRESSION TABLE TIMEOUT";
   IF timeout IS NULL THEN
        LET timeout = 900;
   ELIF timeout < 0 THEN
        LET timeout = 10;
   ELIF timeout > 3600 THEN
        LET timeout = 3600;
   END IF

   SELECT MAX(value::integer) INTO row_count
        FROM sysadmin:ph_threshold
        WHERE name = "COMPRESSION TABLE ROW COUNT";
   IF row_count IS NULL OR row_count < 1000 THEN
        LET row_count = 1000;
   END IF

   BEGIN
        ON EXCEPTION
            DROP TABLE IF EXISTS pt_list;
            INSERT INTO ph_alert
              (ID, alert_task_id,alert_task_seq,alert_type,
               alert_color, alert_object_type,
               alert_object_name, alert_message,alert_action)
            VALUES
              (0,task_id, task_seq, "INFO", "YELLOW",
              "SERVER","compress_table",
              "Failed to build compression dictionaries on " ||TRIM(tabname),
              NULL);
        END EXCEPTION

   IF tabname IS NULL THEN
      RETURN -1;
   END IF

   LET fragments_left = 99;
   LET cnt = 0;

   SELECT P.lockid
     FROM sysmaster:systabnames T, sysmaster:sysptnhdr P
     WHERE TRIM(t.dbsname)||":"||TRIM(T.tabname) = LOWER(tabname)
     AND P.lockid = T.partnum
     AND P.nkeys = 0
     AND bitand( P.flags, '0x08000000' ) = 0
     INTO TEMP pt_list WITH NO LOG;

     CREATE INDEX ix_temp_pt_list ON pt_list(lockid);

     WHILE ( timeout > 0 AND fragments_left > 0 )
         FOREACH SELECT P.partnum
            INTO fragid
            FROM pt_list L, sysmaster:sysptnhdr P
            WHERE l.lockid = P.partnum
            AND P.nrows > row_count
            AND bitand( P.flags, '0x08000000' ) = 0

            LET rc = admin('fragment create_dictionary', fragid);
            IF rc >= 0 THEN
                DELETE FROM pt_list WHERE lockid = fragid;
                LET cnt = cnt + 1;
            END IF

        END FOREACH
        SELECT NVL( count(*) , 0 )
             INTO fragments_left
             FROM pt_list L, sysmaster:sysptnhdr P
             WHERE l.lockid = p.partnum
             AND P.nkeys = 0
             AND bitand( P.flags, '0x08000000' ) = 0;

        LET rc = yieldn(1);
        LET timeout = timeout - 1;
      END WHILE
  END

  DROP TABLE IF EXISTS pt_list;
  INSERT INTO ph_alert
              (ID, alert_task_id,alert_task_seq,alert_type,
               alert_color, alert_object_type,
               alert_object_name, alert_message,alert_action)
           VALUES
              (0,task_id, task_seq, "INFO", "GREEN",
              "SERVER","compress_table",
              "Built "||cnt||" compression dictionaries on " ||TRIM(tabname),
              NULL);

  RETURN 0;

END FUNCTION;

EXAMPLE

Below is an example of how to execute the auto compression task.  We create an empty table and then start the compress_table task.  The exectask_async() function takes the name of a database scheduler task and an optional argument.   We wait one second to ensure the task has fully started, then begin the load activity.  Once the table reaches the threshold, the dictionary is automatically created and all rows inserted after that point are compressed.  It is worth noting that a few thousand rows in our table will not be compressed because they were inserted before the dictionary was created.

create table t1 (c1 serial, c2 char(500));

execute function sysadmin:exectask_async("compress_table","stores_demo:t1");
execute function sysadmin:yieldn(1);

--Load activity
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
insert into t1 select 0, tabname from systables,syscolumns;
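Because the procedure records its result in sysadmin:ph_alert (see the INSERT at the end of compress_table above), a quick way to confirm the dictionaries were built is to look for that alert once the load finishes:

-- Check the alert written by the compress_table task
SELECT alert_message
  FROM sysadmin:ph_alert
 WHERE alert_object_name = "compress_table";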

OpenAdmin Tool (aka OAT) is now installed as part of CSDK


For those who have not heard, the OpenAdmin Tool (aka OAT) is now included with the CSDK product. This has made OAT much simpler to install. In addition, the latest improvement no longer requires a reboot when installing on a Windows operating system newer than XP. (Yes, if you are installing OAT on Windows XP, a reboot is still required.)

Determining the Efficiency of the Informix Virtual Processors


The efficiency column in the onstat -g glo output (see below) is an indicator of how well the oninit processes are being scheduled by the operating system. If the efficiency is too low for the CPU VPs, the performance of the database server will be affected. The efficiency column is there to provide a simple number to gauge how well the database server and the operating system play together. Hopefully they are cooperating buddies.

MT global info:
sessions threads  vps      lngspins
0        26       7        0       

          sched calls     thread switches yield 0   yield n   yield forever
total:    655             607             69        71        144      
per sec:  152             141             3         33        27       

Virtual processor summary:
 class       vps       usercpu   syscpu    total   
 cpu         1        215.94     0.10     216.04   
 aio         1         0.00      0.00      0.00    
 lio         1         0.00      0.00      0.00    
 pio         1         0.00      0.00      0.00    
 adm         1         0.00      0.00      0.00    
 msc         1         0.00      0.00      0.00    
 fifo        1         0.00      0.00      0.00    
 total       7        215.94     0.10     216.04   

Individual virtual processors:
 vp    pid     class    usercpu   syscpu    total     Thread    Eff  
 1     2025    cpu     215.94     0.10     216.04    247.93     87%
 2     2027    adm      0.00      0.00      0.00      0.00       0%
 3     2028    lio      0.00      0.00      0.00      0.26       0%
 4     2029    pio      0.00      0.00      0.00      0.07       0%
 5     2030    aio      0.00      0.00      0.00      1.45       0%
 6     2031    msc      0.00      0.00      0.00      0.06       0%
 7     2032    fifo     0.00      0.00      0.00      0.00       0%
               tot     215.94     0.10     216.04

In short, the formula behind the scenes is a very simple ratio:

( Total operating system time executing on a hardware processor ) divided by ( Total time the Informix threads spent executing on the virtual processor )

Using the stats shown above:

VP #1
=====
Total time given to the operating system processor  216.04
Total time of all threads running on the VP 247.93

This means the Informix threads were running on the oninit process for (247.93 - 216.04 =) 31.89 seconds without finding a hardware processor available to execute on.  While we do realize Unix is sharing the resources, if the efficiency drops below 80% one should start looking at the marriage of the database server to the operating system.
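As a quick sanity check of the numbers above, the same ratio can be computed directly in SQL; the literals are simply the VP #1 values from the onstat -g glo output:

-- Efficiency for VP #1: OS time on a processor divided by thread time on the VP
SELECT FIRST 1 ROUND((216.04 / 247.93) * 100) AS efficiency_pct
  FROM systables;   -- any one-row source will do; result is roughly 87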

How to create a stored procedure which emulates a very fast dbexport in dirty read mode.


This is a simple stored procedure which exports the current database to the specified directory without locking the database. While it is a very simple procedure, it can be extended easily to accommodate parallel unloads based on table size, to scan tables in parallel, and in many other ways to customize the procedure for your own needs.

 

CREATE FUNCTION export_system(load_dir VARCHAR(50) ) 
       RETURNING INTEGER, INTEGER
DEFINE rc                INTEGER;
DEFINE bad               INTEGER;
DEFINE create_ext_tab    LVARCHAR(8192);
DEFINE ins               LVARCHAR(512);
DEFINE drop_ext_tab      LVARCHAR(512);
DEFINE tname             VARCHAR(250);
DEFINE dbschema          LVARCHAR(1024);
DEFINE dbname            VARCHAR(250);
DEFINE numrows           INTEGER;

LET rc=0;
LET bad=0;
LET dbname = DBINFO("dbname");
LET dbschema = "dbschema -q -d "||TRIM(dbname)||" -it DR -ss "||
                TRIM(load_dir)||"/"||TRIM(dbname)||".sql";

SYSTEM dbschema;

SET ISOLATION TO DIRTY READ;

FOREACH SELECT   TRIM(tabname) ,nrows
     INTO tname,numrows
     FROM systables
     WHERE tabid >99 AND tabtype = "T"

     LET create_ext_tab = "CREATE EXTERNAL TABLE "||tname||"_ext "||
        " SAMEAS "||tname||" USING (" ||
        "DATAFILES('DISK:"||load_dir||"/"||tname||".unl'),"||
        "FORMAT 'DELIMITED', "|| "DELIMITER '|', "||
        "RECORDEND '', "||  "DELUXE, ESCAPE, "||
        "NUMROWS "|| numrows+1||", "||    "MAXERRORS  100, "||
        "REJECTFILE '"||load_dir||"/"||tname||".reject' " ||
        " )";

     LET ins = "INSERT INTO "||tname||"_ext SELECT * FROM "||tname;
     LET drop_ext_tab = "DROP TABLE "||tname||"_ext";

     EXECUTE IMMEDIATE create_ext_tab;
     EXECUTE IMMEDIATE ins;
     EXECUTE IMMEDIATE drop_ext_tab;

     LET rc = rc + 1;

END FOREACH

RETURN rc, bad;

END FUNCTION;
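A minimal invocation might look like the following; the directory name is just an example and must already exist and be writable by the database server:

-- Export every user table of the current database to /tmp/export
EXECUTE FUNCTION export_system("/tmp/export");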

 

Article on the OpenAdmin Tool Health Advisor


If you want to read a good article about the OpenAdmin Tool Health Advisor, see http://www.ibm.com/developerworks/data/library/techarticle/dm-1303openadmin/index.html.

This article is a deep dive into the Health Advisor plug-in for the OpenAdmin Tool for Informix®. As the graphical user interface for monitoring and administering the Informix database server, the OpenAdmin Tool for Informix (OAT) is an ideal vehicle for a health check system for your database server. With the introduction of the Health Advisor plug-in, you can use OAT to regularly analyze the health and performance of your Informix database server. This article covers how the Health Advisor plug-in works, how to use it, and how it can benefit an Informix DBA. It also provides a step-by-step guide to customizing the Health Advisor by adding your own health checks and alarms.

 


Mobile OpenAdmin Tool


Use Mobile OAT to monitor a single IBM Informix server or a group of Informix servers from your smartphone. For example, you can view the online log and information about users such as commits, connection duration, and rows processed. You can find out which tables have the most inserts, updates, deletes, and scans. You can monitor a server's free memory, processor usage, I/O activity, the number of sessions, and more.   For more details please see:

 

Product Download site
IBM Mobile OAT for Android devices https://play.google.com/store/apps/details?id=air.com.ibm.swg.im.MobileOAT
IBM Mobile OAT for iOS devices https://itunes.apple.com/us/app/ibm-mobile-openadmin-tool/id615822149?mt=8

 

Ensuring logical logs do not get too old


If a user wants to ensure a log is closed after a specific duration, the following procedure and database scheduler task will accomplish this. The task will:

  1. Check how old the current logical log is
  2. If it is older than a specified interval then
  3. Switch to a new logical log
  4. Put a message in the alert system
DROP FUNCTION IF EXISTS max_log_duration(INTEGER, INTEGER);
CREATE FUNCTION informix.max_log_duration(task_id INTEGER, task_seq INTEGER)
   RETURNING INTEGER
 DEFINE value    INTEGER;
 DEFINE ret      INTEGER;

    LET ret = 0;
    SELECT CASE
             WHEN dbinfo('UTC_TO_DATETIME',filltime) < CURRENT - tk_frequency
                  THEN uniqid
                  ELSE 0
           END
      INTO value
      FROM sysmaster:syslogfil L, sysadmin:ph_task
      WHERE tk_name = "max_log_duration"
        AND uniqid = (SELECT MAX(uniqid) FROM sysmaster:syslogfil
                      WHERE bitand(flags, '0x2') = 0);

    IF value > 0 THEN
       LET ret = sysadmin:admin('onmode','l');
       INSERT INTO ph_alert
              (ID, alert_task_id,alert_task_seq,alert_type,
               alert_color, alert_object_type,
               alert_object_name, alert_message,alert_action)
       VALUES
              (0,task_id, task_seq, "INFO", "GREEN",
              "SERVER","Logical Logs", 
              "Logical log "||value||" was idle too long, "||
              "switch to new log. command_id = "||ret,
                NULL);
   END IF

  return ret;

END FUNCTION;
DATABASE sysadmin;

DELETE FROM ph_task where tk_name = "max_log_duration";
INSERT INTO ph_task
(
tk_name,
tk_type,
tk_group,
tk_description,
tk_execute,
tk_start_time,
tk_stop_time,
tk_frequency
)
VALUES
(
"max_log_duration",
"TASK",
"TABLES",
"Ensure a logical log can not be current for more than a duration",
"max_log_duration",
NULL,
NULL,
INTERVAL ( 15 ) MINUTE TO MINUTE
);
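Once the task is registered it will run every 15 minutes, as set by tk_frequency above. Whenever it does force a log switch it writes an alert, so a query such as the following (a sketch that assumes the standard ph_alert columns, including alert_time) shows what it has done:

-- List log-switch alerts written by max_log_duration
SELECT alert_time, alert_message
  FROM sysadmin:ph_alert
 WHERE alert_object_name = "Logical Logs"
 ORDER BY alert_time DESC;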

How to find out the current Isolation Level


To acquire the isolation level, the following stored procedure returns a number representing the isolation level, as listed in the table below.

CREATE FUNCTION Get_Isolation_Level()
RETURNING INTEGER

RETURN (SELECT isolevel
FROM sysmaster:sysrstcb R, sysmaster:systxptab T
WHERE R.sid = DBINFO("sessionid")
AND T.address = R.txp);

END FUNCTION;

EXECUTE FUNCTION Get_Isolation_Level();

0        no transactions
1        dirty read (read only)
2        read committed data only
3        cursor record locked
5        repeatable reads
6        dirty read warning
7        dirty read (read only) retain U-locks
8        read committed data only retain U-locks
9        cursor record locked retain U-locks
10       dirty read warning retain U-locks
11       Last committed
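As a quick sketch of how it behaves, a session that switches to dirty read isolation in a logged database should see the function return 1, per the table above:

SET ISOLATION TO DIRTY READ;
EXECUTE FUNCTION Get_Isolation_Level();
-- expected result: 1 (dirty read)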


OpenAdmin Tool Resources


The OpenAdmin Tool is growing very fast now that the newer version has been released. While the newest version was released with the 12.10 product, it will operate against all version 11 and 12 Informix database servers. Below is a list of some of the nice resources.

General OAT website

 Mobile OpenAdmin Tool Downloads

Nice Articles

 

 

Informix Chat with the Lab – Query Optimizer Enhancements in 12.1


Event Date:           Jul 25, 2013

Event Time:          10:30 AM – 12:00 PM (Central Time)

Presented By:

  • Jerry Keesee – Director, IBM Informix Database Development (IBM),
  • Bingjie Miao – Software Engineer (IBM)

This presentation will focus on using explain output to demonstrate recent improvements in query optimization such as view folding, sub-query flattening, and hash join support. Learn how to interpret the various entities in the explain file, such as optimizer costing, directives, access methods, table order selection, filter analysis, and sub-query execution. Explore set operators, view folding enhancements with outer joins, subquery flattening with views, ANSI outer join to Informix outer join transformation, hash join support in ANSI JOIN, temp table optimization, predicate derivation in ANSI JOIN, and more using the explain text.

Register for the Event

Register for this event at:
https://events.na.collabserv.com/register.php?id=0dcccc796b&l=en-US

For technical questions, contact support at support@collabserv.com.
For questions about this event, contact the host at: mckeithe@us.ibm.com.

Informix Database Server integrated with Shaspa smart gateway solution on an ARM processor


Today (September 10, 2013) at IFA, the consumer electronics show, Shaspa introduced their "In Home" Bridge, which acts as a central hub for data from a large selection of sensors and actuators such as air conditioners, thermostats, and lighting systems. Combined with the performance and reliability of the Informix database server and its native TimeSeries capability, a market-differentiating feature that optimizes management of time-oriented data, Shaspa is able to store more data and respond to consumer requests faster and more accurately. The Shaspa Bridge has everything necessary for smart instrumentation. It can control any building or environment and implements the features required for configuring devices and for monitoring and controlling those systems and appliances.

Informix's award-winning, enterprise-class embedded database is optimized for the ARM processor, taking advantage of the processor's reduced cost, heat, and power usage.  The Informix database server offers an all new class of service and functionality, now available to ARM developers.

[Image: ARMDevice]

 

 

Informix now supports JSON and BSON datatypes natively


The release of Informix 12.10.xC2 on September 13, 2013 brings some great new features, leading off with native support in the server for the JSON and BSON types and for collections.   This native support comes with both typeless indexes, as supported by Mongo, and traditional typed SQL indexes, along with comparison functions, distributed queries, sharding, transaction support, stored procedures, automatic compression of collection data, and many other features you would expect an enterprise-class database to possess.

In addition to the JSON and BSON data types, compatibility with MongoDB client-side drivers, which include C, C++, C#, Java, Python, Erlang, Perl, Ruby, and several others, was a key part of this release. This means an existing application using the Mongo drivers can be pointed to the Informix database server and operate with little or no modification.  Programmers can also use any of the existing Mongo drivers to write new applications against the Informix database server.

While many people need JSON support, they also need traditional SQL support, especially transactional support.  People want to ensure they can remove inventory and create the proper invoices as a single unit of work (transaction support).  Inside the Informix database server this is not a problem: you can access collections together with standard SQL tables inside a single transaction.  In addition, you can encapsulate business logic inside stored procedures that operate on both JSON collections and SQL tables at the same time.   One of the most exciting capabilities is the ability to join collections to SQL tables, and collections to other collections, while utilizing the indexes created on those SQL tables and collections.  The DBA and/or programmer no longer has to decide up front whether to build a system only of collections or only of SQL tables, but instead has a single database in which the programmer decides whether a collection or a SQL table is optimal for each case.
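As a rough sketch of that hybrid access (the orders collection, the customer table, and the customer_num field are hypothetical, and the example assumes the bson_value_int() accessor on the collection's BSON data column), a join between a collection and a SQL table can look like ordinary SQL:

-- Sketch: join a Mongo-created collection ("orders") to a SQL table ("customer").
-- A collection surfaces as a table with a BSON column named data;
-- bson_value_int() extracts a numeric field from that column.
SELECT c.customer_num, c.fname, c.lname
  FROM customer c, orders o
 WHERE bson_value_int(o.data, 'customer_num') = c.customer_num;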

To enhance a new user's out-of-the-box experience, the Informix server has moved away from a fixed-size configuration and opted for an auto-tuning, out-of-the-box system.  The installation asks if you want a small, medium, large, or extra-large system, which directly relates to the amount of resources consumed on the host computer.  The newly installed instance will automatically adapt to the computer's resources and perform run-time self-tuning of buffers, recovery logs, disk space, and many other database and operating system resources.

The Informix OpenAdmin Tool (aka OAT) has been enhanced to allow users to monitor collections along with all their other database objects, providing a seamless way of monitoring your NoSQL data alongside your traditional SQL data.

 

 

Informix Selling on an ARM product


Thought this was interesting enough to share.  This is the first ARM product for IBM Informix, and there is "IBM Informix Software" printed right on the box (along with Tatung and Shaspa). This is cool!!

[Image: InformixARM]

 

Informix at the Information OnDemand Conference


While the conference is not over, there have been some great things that I have seen and heard.   First there is a vendor, Shaspa, who is displaying the full enterprise edition of Informix running on a computer (ARMv7) that will fit in your pocket.  Then I attended the Paddy Power session (an online gaming company in Ireland), where they explained their environment and how they achieved 2 million transactions a second on a single AIX P series computer, also on the Informix enterprise edition.  Is this not scalability?

Also at the conference, the NoSQL JSON document store feature set released in 12.10 has been very well received; the tutorial session had 250 attendees.  The hybrid storage and hybrid application capabilities have really hit the mark with many customers.   Being able to store both JSON documents and SQL tables in a single database, while a single application uses either the MongoDB open source drivers or the traditional SQL drivers to access both types of data, has really hit the mark for enterprise customers' technology requirements.

Hybrid App Development with Informix JSON Listener


My experience with the Informix JSON Listener and Mongo PHP driver for app development was not what I expected. It was in fact easier and more pleasant than I thought it would be. I had some experience with traditional relational databases before my internship, but only here at IBM Lenexa did I use Informix for the first time. When I was first assigned to work on a project that used both SQL and NoSQL technologies simultaneously, it seemed daunting. I had no idea how NoSQL worked, let alone how to combine it with regular SQL queries. Once I started to research MongoDB (whose driver is used to communicate with Informix via the JSON Listener), those feelings rapidly went away.

The concept behind the JSON Listener is very intuitive. It allows you to communicate with Informix, which is a relational database by nature, as if it were a document collection database. Having previously worked with JSON for web pages that use AJAX and libraries such as jQuery, learning the Mongo syntax felt very familiar. In addition, the JSON Listener allows you to pass SQL statements to the database through the use of a special SQL collection. By doing this you can perform certain SQL-only operations and still get logical results. This opened up new possibilities for developing apps by allowing you to choose different models for different jobs. All this was provided without the need to install any PDOs, drivers, or extensions besides the Mongo driver.

I used PHP for the back-end of the applications I worked on. From PHP's point of view, the server "thinks" it is communicating with a MongoDB instance via the Mongo driver. In reality, the JSON Listener sits between the PHP server and the Informix server. Both the collection model and the regular relational table model are accessible via either Mongo-syntax JSON queries or SQL statements passed through the SQL collection. This hybrid environment allowed me to use the advantages of each model when necessary and to quickly prototype certain functions using collections with flexible schemas. From the traditional SQL perspective, I have the power of transactions, joins, data types, sharding, analytics, and other features present in Informix. If I needed a specific SQL function I could simply use the special SQL collection and pass it the SQL query. In these cases, though, one would need to sanitize statements manually due to the lack of PDOs, which have built-in functions that prepare statements.

The Mongo side of the development perspective was useful for various reasons. As a developer I felt I could extract information directly from the web pages and store it in the database with no hassle. Using JavaScript objects and passing them to the server, the data was already in the format in which it would be stored. In PHP I could do validation if necessary before entering the data into the database. Furthermore, these document collections are queried using JSON objects via the Mongo PHP driver. This makes creating queries based on selected fields in the web pages very easy and eliminates the need to use prepared statements and sanitize entries for fear of SQL injection. However, the real beauty of it all is that it is actually backed not by MongoDB but by an Informix database. The JSON Listener takes care of the translations and operations behind the scenes.

This is just the beginning of the possibilities for this hybrid development structure. I believe this structure will be more common in the future of app creation than simply choosing only SQL or only NoSQL. The more tools developers are given, the more creative they can be and the more choices they have for tackling their problems. That is what developers want: reliable and varied tools to solve their problems. Informix is working hard to provide that with the JSON Listener. This technology will help drive the next generation of web-based applications in an industry where time is precious, rapid prototyping is preferred, and scalability is key. With NoSQL capabilities infused into IDS, Informix is off to a great start.

