Is there a way to speed up the ODBC connection?
Question asked by Stan Paulauskas - 3/16/2022 at 2:34 PM
Answered
Use Case
I am using the ODBC driver to pump data out of Atrex and into an EDW using python (pyodbc) on Windows 10 Pro (x64). I have followed the instructions on the "Configure the Atrex/ElevateDB ODBC Driver" knowledge base article. I'm finding that the query performance even on small tables is incredibly slow. 

I'm using the "information.Tables" table as a benchmark. Executing
select * from information.Tables
takes approximately 30 seconds to complete, see log below. This table has 102 rows and 23 columns. I've performed a similar query against a MySQL database table with 1579 rows and 169 columns. The logging below demonstrates the performance I'm getting with the same base code just swapping out the DSNs and tables. I have posted my test code to a GitHub Gist. Note that I don't have permission to post hyperlinks. Here's the link: https://gist.github.com/xia-stan/c804d4479de360283577bebba2ce6487
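For context, the gist boils down to a harness along these lines. This is a sketch, not the gist itself: the `timed_fetch` name and the stage logging are illustrative, and only the pyodbc calls (`connect`, `cursor`, `execute`, `fetchall`) are standard API.

```python
import logging
import time

def timed_fetch(connect, query):
    """Run `query` over the connection returned by `connect()` and
    log how long the fetch takes (the 30 s in the log below is spent
    almost entirely in fetchall())."""
    t0 = time.perf_counter()
    conn = connect()
    cur = conn.cursor()
    cur.execute(query)
    rows = cur.fetchall()
    elapsed = time.perf_counter() - t0
    logging.info("Fetched %d rows in %.3f s", len(rows), elapsed)
    return rows, elapsed
```

Called with something like `timed_fetch(lambda: pyodbc.connect("DSN=Atrex"), "select * from information.Tables")`, where the DSN name is an assumption.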

Logging information for Query to Atrex:
2022-03-16T14:15:07.626 - INFO - Initializing Connection
2022-03-16T14:15:09.923 - INFO - Getting cursor
2022-03-16T14:15:09.923 - INFO - Executing 'select * from information.Tables'
2022-03-16T14:15:10.177 - INFO - Fetching records
2022-03-16T14:15:42.685 - INFO - Result set contains 102 records.
2022-03-16T14:15:42.685 - INFO - Processing results of cursor.fetchall()
2022-03-16T14:15:42.841 - INFO - Outputting to json
2022-03-16T14:15:42.933 - INFO - Execution complete
Logging information for Query to MySQL:
2022-03-16T14:19:29.735 - INFO - Initializing Connection
2022-03-16T14:19:30.095 - INFO - Getting cursor
2022-03-16T14:19:30.095 - INFO - Executing 'select * from contacts'
2022-03-16T14:19:31.655 - INFO - Fetching records
2022-03-16T14:19:32.065 - INFO - Result set contains 1579 records.
2022-03-16T14:19:32.065 - INFO - Processing results of cursor.fetchall()
2022-03-16T14:19:32.085 - INFO - Outputting to json
2022-03-16T14:19:32.230 - INFO - Execution complete
What can I do to improve the ODBC performance to the Atrex database?

8 Replies

Terry Swiers Replied
Employee Post
First of all, make sure that you have the compression level set to anything other than "None".  

ElevateDB and MySQL handle large text fields differently. MySQL always sends large text fields as part of the data response. EDB retrieves them only when they are actually read, so each record containing a large text field costs a couple of extra round trips to the server as it is accessed.

Unless you actually need them, specify only the fields you want to retrieve from the schema so you avoid pulling down the Description, Attributes, CreateSQL, and DropSQL columns. If you just need the table names, use the following:

select name from information.tables

Stan Paulauskas Replied
I have set the compression to maximum.

Yes, selecting * is almost always a bad idea. In this case I'm trying to make a direct copy of the table to MySQL. I've noticed that this also happens on the code table when I specify only a few columns. For the actual data tables we'll need all of the columns so that we can execute our reporting layer properly.
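The table copy described here can be sketched roughly as below. The function name, table name, and column list are illustrative; `fast_executemany` is a real pyodbc cursor attribute that batches the inserts into fewer round trips, which matters if the target is also reached over a high-latency link.

```python
def copy_table(src_conn, dst_conn, table, columns):
    """Copy `columns` of `table` from one pyodbc connection to another."""
    col_list = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)
    # Pull all rows from the source in one statement.
    rows = src_conn.cursor().execute(
        f"select {col_list} from {table}").fetchall()
    dst = dst_conn.cursor()
    dst.fast_executemany = True  # pyodbc: send inserts in batches
    dst.executemany(
        f"insert into {table} ({col_list}) values ({placeholders})", rows)
    dst_conn.commit()
    return len(rows)
```

With two live connections this would be invoked as e.g. `copy_table(atrex_conn, mysql_conn, "code", [...])`, assuming matching column names on both sides.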

Terry Swiers Replied
Employee Post
You will see the same thing with the code table if you include either the Notes or OnlineDescrHTML columns.  

You can also experiment with the read-ahead rows setting. If you are pulling down entire tables, bump it to 25 or 50.
Stan Paulauskas Replied
I've updated my test to select everything from information.Tables except the CreateSQL and DropSQL columns. It still takes 16 seconds to return 102 rows of data. Further removing Description and Attributes reduces this to 8 seconds. While the CLOBs and text fields certainly contribute, they don't appear to be the full story.

I updated the MySQL comparison to use a table with a CLOB column (LONGTEXT), and populated the table with 1000 rows. This returns the data in less than a second. 

Where do I set the read-ahead rows? I can't find any information about setting the value in the ElevateDB documentation or forums. I did find a discussion of it elsewhere, but it didn't mention how to set the number of rows.
Terry Swiers Replied
Employee Post Marked As Answer
The read-ahead rows setting is just an option of the ODBC connection for the ElevateDB ODBC driver. You will find it just below where you set the compression for the ODBC connection.

I don't think the performance issue is the ODBC driver itself, but rather the way that ElevateDB handles the data transmission. Using the standard read-ahead of 10 records with the exact same data, with one endpoint being a local server and the other a remote server across the country, here are the results. Each of these tests retrieves the results and saves them to a local JSON file to guarantee that all of the data is actually pulled from the server.

Local Server - Avg Latency <1ms

Query: select name from information.tables
Start: 03/17/2022 10:12:52 AM
Record Count: 125
End: 03/17/2022 10:12:52 AM
Elapsed Time: 48ms

Query: select * from information.tables
Start: 03/17/2022 10:12:52 AM
Record Count: 125
End: 03/17/2022 10:12:53 AM
Elapsed Time: 62ms

Query: select stockcode, onhand, descr, price from code where onhand <> 0
Start: 03/17/2022 10:12:53 AM
Record Count: 3176
End: 03/17/2022 10:12:53 AM
Elapsed Time: 906ms

Query: select * from code where onhand <> 0
Start: 03/17/2022 10:12:53 AM
Record Count: 3176
End: 03/17/2022 10:12:55 AM
Elapsed Time: 1547ms

Complete


Remote Server - Avg Latency = 70ms
 
Query: select name from information.tables
Start: 03/17/2022 10:04:38 AM
Record Count: 125
End: 03/17/2022 10:04:39 AM
Elapsed Time: 1266ms

Query: select * from information.tables
Start: 03/17/2022 10:04:39 AM
Record Count: 125
End: 03/17/2022 10:05:05 AM
Elapsed Time: 25343ms

Query: select stockcode, onhand, descr, price from code where onhand <> 0
Start: 03/17/2022 10:05:05 AM
Record Count: 3176
End: 03/17/2022 10:05:11 AM
Elapsed Time: 6845ms

Query: select * from code where onhand <> 0
Start: 03/17/2022 10:05:11 AM
Record Count: 3176
End: 03/17/2022 10:09:01 AM
Elapsed Time: 230015ms

Complete
If the ODBC driver were the bottleneck, the local data retrieval would be significantly slower as well. The biggest factor here appears to be the latency between the ODBC driver and the data source.
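A back-of-the-envelope check supports the latency explanation. If each of the 3,176 records with large text fields costs roughly one extra round trip, the 70 ms link latency alone accounts for most of the 230 s in the remote `select * from code` run. This is only an estimate; the exact number of trips per record isn't stated above.

```python
records = 3176        # rows returned by the remote "select * from code"
latency_s = 0.070     # average round-trip latency to the remote server
elapsed_s = 230.015   # measured elapsed time for the remote select *

per_record = elapsed_s / records
print(round(per_record, 3))  # ~0.072 s per record, close to the 70 ms latency
```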
Stan Paulauskas Replied
I've checked the ODBC configuration and don't see anything about read-ahead rows. Here's the Wizard page that deals with compression. Maybe I have an older version of the driver? I'm using version 2.32.0.1. 

Terry Swiers Replied
Employee Post
I have sent you an email with a link to the current components.  
Stan Paulauskas Replied
Thanks for the help! The read-ahead rows didn't help much, but I did manage to track down the likely issue. Our instance is running on a VM with 1 core and 2 GB RAM. Obviously, this isn't ideal for this particular use case. I'll see if bumping this up resolves the performance issues. Thank you for your patience while we worked through this!
