

LOCAL DATABASE

Latest version: 5.0.1 build 1126. November 28, 2025.

The “Local Database” export plugin for Data Logger Suite creates ready-to-use files from parsed serial or network data without relying on third-party components. It writes directly to Microsoft Excel (XLS), RTF, HTML, CSV/TXT, XML, DBF, PDF and many other formats, which improves throughput and removes the need for additional software on the server or workstation. For users and integrators this means predictable, high-speed exports to local or network drives that are immediately usable for reporting, analysis, or archival.

Key capabilities and practical examples:

  • Multi-format output: choose the file type required by the data consumer. Example: a production monitoring system exports CSV for an automated quality-check script (a sketch of such a script follows this list), XLS for shift reports, or XML for integration with a product-tracking system.
  • Per-type format customization: define how numbers, dates/times and booleans are represented. Example: export temperature readings with two decimal places and a comma as the decimal separator; timestamps in YYYY-MM-DD hh:nn:ss for database import or DD/MM/YYYY for printed reports.
  • Column selection and ordering: explicitly define which parser variables become file columns and in what order. Example: production-line telemetry as SENSOR_ID, TIMESTAMP, PRESSURE, TEMPERATURE so the recipient always receives the expected data layout even if the data source changes.
  • Efficient buffering and write modes: buffer high-rate data in memory and use batched writes to reduce disk load. Modes: Immediate (slower, more robust), Timed flush, Idle flush (when no new data arrives). Example: a system that records thousands of measurements per minute uses batched writes to produce hourly XLS files without overloading the disk subsystem.
  • File naming and rotation: prefixes, parser variable values in filenames, and date formats (hourly/daily/monthly/custom) for automatic rotation and easy file discovery. Example: data_{SAMPLE_ID}_YYYYMMDD.csv – daily files per sample, ready for archiving.
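
For illustration only, the following minimal Python sketch shows how such an automated quality-check script might consume a CSV export. The file name, column names, and alarm threshold are assumptions based on the examples above; the plugin itself only produces the file.

# Hypothetical consumer of a daily CSV file produced by the Local Database
# plugin. File name, columns and threshold are assumptions taken from the
# examples above; a point is assumed as the decimal separator.
import csv

TEMP_LIMIT = 85.0  # assumed alarm threshold

with open("data_SAMPLE1_20260102.csv", newline="") as f:
    for row in csv.DictReader(f):
        temp = float(row["TEMPERATURE"])
        if temp > TEMP_LIMIT:
            print(f"ALARM: sensor {row['SENSOR_ID']} at {row['TIMESTAMP']}: {temp}")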

Download Documentation

HOW IT WORKS (BRIEF)

The plugin receives parsed records from the parser and stores them in a temporary in-memory buffer. When configured conditions occur (time interval, number of records, or immediate write), the buffer is flushed to the selected file format. If a format does not support appending, the plugin accumulates all data in memory and rewrites the target file completely. Note: formats that require a full rewrite can consume significant RAM to hold the entire file contents before writing – for long intervals (for example, monthly files) plan memory accordingly or switch to more frequent rotation (daily/hourly).
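
The plugin performs this buffering internally; the short Python sketch below only illustrates the flush logic conceptually. The record-count and time-interval triggers are assumptions for the example, not the plugin's actual parameter names.

# Conceptual illustration of batched writes: records accumulate in memory and
# are flushed either after a number of records or after a time interval,
# whichever comes first (both values are illustrative).
import time

class BatchedWriter:
    def __init__(self, path, max_records=500, max_age_s=10.0):
        self.path, self.max_records, self.max_age_s = path, max_records, max_age_s
        self.buffer, self.last_flush = [], time.monotonic()

    def add(self, line):
        self.buffer.append(line)
        if len(self.buffer) >= self.max_records or \
           time.monotonic() - self.last_flush >= self.max_age_s:
            self.flush()

    def flush(self):
        if self.buffer:
            with open(self.path, "a", encoding="utf-8") as f:
                f.write("\n".join(self.buffer) + "\n")
            self.buffer.clear()
        self.last_flush = time.monotonic()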

EXAMPLES OF USE WITH ADVANCED SERIAL DATA LOGGER (ASDL)

Scenario: a factory collects telemetry from PLCs via ASDL. An example of the parsed input, as shown in the main program window:

TIMESTAMP=2026-01-02 08:12:03; PLC_ID=PLC12; TEMP=78.34; STATUS=OK

Export to CSV for the historian: configure the separator as “,” and the text qualifier as ‘”’ – the resulting daily CSV:

TIMESTAMP,PLC_ID,TEMP,STATUS
2026-01-02 08:12:03,PLC12,78.34,OK

Scenario: remote sensors send a JSON string over TCP; the logger parses and maps fields. Example of parsed data:

SENSOR=WX101; TIME=2026-01-02T09:00:00; HUM=45.2; RAIN=0
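
Assuming a daily CSV export configured the same way as in the previous scenario (the choice of CSV here is an assumption; any supported format could be used), the corresponding lines in the daily file would be:

SENSOR,TIME,HUM,RAIN
WX101,2026-01-02T09:00:00,45.2,0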

CONFIGURATION RECOMMENDATIONS

  1. Choose the write mode according to data rate. For high-frequency sources use buffering and batched writes; for critical audit trails use Immediate mode.
  2. On network (mapped) drives, create smaller, more frequent files – this reduces file locking and speeds up operations. Example: prefer hourly files instead of monthly ones for intensive telemetry.
  3. Use parser variable placeholders in filename templates to split data automatically by the desired key: data_{SENSOR}_YYYYMMDDHH.csv will create files per sensor per hour, simplifying parallel processing and search (see the sketch after this list).
  4. For formats that do not support appending, monitor memory usage – shorten flush intervals or switch to appendable formats (CSV, XML, or XLS where supported).
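
As a hypothetical illustration of recommendation 3, files split per sensor and per hour can be processed independently by a downstream job. The file pattern and the summary step below are assumptions, not part of the plugin.

# Hypothetical downstream job: each per-sensor hourly file produced by a
# data_{SENSOR}_YYYYMMDDHH.csv template is processed in a separate worker.
import csv
import glob
from concurrent.futures import ProcessPoolExecutor

def summarize(path):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return path, len(rows)

if __name__ == "__main__":
    files = glob.glob("data_*_??????????.csv")  # e.g. data_WX101_2026010209.csv
    with ProcessPoolExecutor() as pool:
        for path, count in pool.map(summarize, files):
            print(f"{path}: {count} records")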