
# import

Command: import
Category: Data Tools
Status: Production Ready

## Description

Import data from CSV or Excel files into a database table. This is the complementary command to export.

## Syntax

```bash
hana-cli import [options]
```

## Aliases

- `imp`
- `uploadData`
- `uploaddata`

## Parameters

### Positional Arguments

None.

### Options

| Option | Alias | Type | Default | Description |
| --- | --- | --- | --- | --- |
| `--filename` | `-n` | string | - | Path to the CSV or Excel file to import. |
| `--table` | `-t` | string | - | Target database table (format: `SCHEMA.TABLE` or `TABLE`). |
| `--schema` | `-s` | string | `**CURRENT_SCHEMA**` | Import schema. |
| `--output` | `-o` | string | `csv` | File format. Choices: `csv`, `excel`. |
| `--matchMode` | `-m` | string | `auto` | Column matching strategy. Choices: `order`, `name`, `auto`. |
| `--truncate` | `--tr` | boolean | `false` | Truncate target table before import. |
| `--batchSize` | `-b` | number | `1000` | Rows per batch insert (1-10000). |
| `--worksheet` | `-w` | number | `1` | Excel worksheet number (1-based). |
| `--startRow` | `--sr` | number | `1` | Starting row number in Excel (1-based). |
| `--skipEmptyRows` | `--se` | boolean | `true` | Skip empty rows in Excel files. |
| `--excelCacheMode` | `--ec` | string | `cache` | Excel shared strings cache mode. Choices: `cache`, `emit`, `ignore`. |
| `--dryRun` | `--dr` | boolean | `false` | Preview import results without committing to the database. |
| `--maxFileSizeMB` | `--mfs` | number | `500` | Maximum file size in MB. |
| `--timeoutSeconds` | `--ts` | number | `3600` | Import operation timeout in seconds (0 = no timeout). |
| `--nullValues` | `--nv` | string | `null,NULL,#N/A` | Comma-separated list of values to treat as NULL. |
| `--skipWithErrors` | `--swe` | boolean | `false` | Continue import even if errors exceed threshold. |
| `--maxErrorsAllowed` | `--mea` | number | `-1` | Maximum errors allowed before stopping (-1 = unlimited). |
| `--profile` | `-p` | string | - | CDS profile for connection. |

### Connection Parameters

| Option | Alias | Type | Default | Description |
| --- | --- | --- | --- | --- |
| `--admin` | `-a` | boolean | `false` | Connect via admin (`default-env-admin.json`). |
| `--conn` | - | string | - | Connection filename to override `default-env.json`. |

### Troubleshooting

| Option | Alias | Type | Default | Description |
| --- | --- | --- | --- | --- |
| `--disableVerbose` | `--quiet` | boolean | `false` | Disable verbose output (scripting mode). |
| `--debug` | `-d` | boolean | `false` | Debug hana-cli with detailed output. |

## Special Default Values

| Token | Resolves To | Description |
| --- | --- | --- |
| `**CURRENT_SCHEMA**` | Current user's schema | Used as default for `--schema`. |

## Interactive Mode

In interactive mode, you are prompted for:

| Parameter | Required | Prompted | Notes |
| --- | --- | --- | --- |
| filename | Yes | Always | Path to CSV/Excel file. |
| table | Yes | Always | Target table. |
| schema | No | Always | Defaults to current schema if omitted. |
| output | Yes | Always | File format (csv/excel). |
| matchMode | Yes | Always | Column matching strategy. |
| truncate | No | Skipped | Use `--truncate` when needed. |
| batchSize | No | Skipped | Use `--batchSize` to tune performance. |
| worksheet | No | Skipped | Use `--worksheet` for Excel. |
| startRow | No | Skipped | Use `--startRow` for Excel. |
| skipEmptyRows | No | Skipped | Use `--skipEmptyRows` for Excel. |
| excelCacheMode | No | Skipped | Use `--excelCacheMode` for Excel. |
| dryRun | No | Skipped | Use `--dryRun` for preview. |
| maxFileSizeMB | No | Skipped | Use `--maxFileSizeMB` to cap file size. |
| timeoutSeconds | No | Skipped | Use `--timeoutSeconds` to cap runtime. |
| nullValues | No | Skipped | Use `--nullValues` for custom NULLs. |
| skipWithErrors | No | Skipped | Use `--skipWithErrors` to continue. |
| maxErrorsAllowed | No | Skipped | Use `--maxErrorsAllowed` to cap errors. |
| profile | No | Always | Optional CDS profile. |

## Examples

```bash
hana-cli import --filename data.csv --table myTable
```

## Column Matching Strategies

### Match Mode: "order"

Columns are matched by position, regardless of column names:

```text
File columns:  [ID, Name, Price]
Table columns: [PRODUCT_ID, PRODUCT_NAME, COST]
Mapping:       ID → PRODUCT_ID, Name → PRODUCT_NAME, Price → COST
```

### Match Mode: "name"

Columns are matched strictly by name (case-insensitive):

```text
File columns:  [ID, ProductName, Cost]
Table columns: [ID, PRODUCT_NAME, COST]
Mapping:       ID → ID, ProductName → PRODUCT_NAME, Cost → COST
```

### Match Mode: "auto" (Default)

Attempts name-based matching first, then falls back to position matching for unmatched columns:

```text
File columns:  [ID, ProductName, UnknownCol]
Table columns: [ID, PRODUCT_NAME, COST]
Mapping:       ID → ID, ProductName → PRODUCT_NAME, UnknownCol → COST
```
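The three strategies can be sketched as follows. This is an illustrative model, not hana-cli's actual implementation; in particular, the name-normalization rule (lower-casing and ignoring underscores) is an assumption inferred from the `ProductName → PRODUCT_NAME` example above.

```python
def normalize(name: str) -> str:
    # Assumed normalization: case-insensitive, underscores ignored
    # (inferred from the ProductName -> PRODUCT_NAME example).
    return name.lower().replace("_", "")

def match_columns(file_cols, table_cols, mode="auto"):
    """Map each file column to a table column, mimicking order/name/auto."""
    if mode == "order":
        return dict(zip(file_cols, table_cols))
    by_name = {normalize(t): t for t in table_cols}
    mapping = {}
    for f in file_cols:                      # name-based pass
        if normalize(f) in by_name:
            mapping[f] = by_name[normalize(f)]
    if mode == "auto":
        # Fall back to position for anything the name pass missed.
        unmatched_file = [f for f in file_cols if f not in mapping]
        unmatched_table = [t for t in table_cols if t not in mapping.values()]
        mapping.update(zip(unmatched_file, unmatched_table))
    return mapping

print(match_columns(["ID", "ProductName", "UnknownCol"],
                    ["ID", "PRODUCT_NAME", "COST"]))
# → {'ID': 'ID', 'ProductName': 'PRODUCT_NAME', 'UnknownCol': 'COST'}
```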

## File Format Support

### CSV Format Requirements

- Standard comma-separated values
- First row contains column headers
- Values can be quoted (supports embedded commas and quotes)
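For reference, the quoting rules follow the usual RFC 4180 conventions: a field containing commas or quotes is wrapped in double quotes, and embedded quotes are doubled. Any standard CSV parser handles this; here is a quick illustration using Python's `csv` module (not hana-cli code):

```python
import csv
import io

# A quoted field with an embedded comma, and one with doubled quotes.
raw = 'ID,NAME,NOTE\n1,"Widget, large","He said ""ok"""\n'
rows = list(csv.reader(io.StringIO(raw)))
print(rows[1])  # → ['1', 'Widget, large', 'He said "ok"']
```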

### Excel Format Requirements

- `.xlsx` format (modern Excel)
- First worksheet is used by default (select others with `--worksheet`)
- First row contains column headers (configurable with `--startRow`)

## Data Type Handling

The import command converts values based on the target table's column definitions:

- Integer columns: parsed from string/number to integer
- Decimal/numeric columns: parsed from string/number to decimal
- Date/timestamp columns: converted from string to date/time
- Boolean columns: "true", "1", "yes" → true; "false", "0", "no" → false
- Text columns: stored as string
- Null values: empty cells, or any value listed in `--nullValues`, become NULL
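The boolean and NULL rules can be modeled as below. This is an illustrative sketch of the conversion table above, not the actual hana-cli converter; the `NULL_VALUES` set mirrors the documented `--nullValues` default.

```python
from decimal import Decimal

NULL_VALUES = {"null", "NULL", "#N/A"}  # mirrors the --nullValues default

def convert_cell(raw, col_type):
    """Convert one raw cell value per the rules above (simplified type tags)."""
    if raw == "" or raw in NULL_VALUES:
        return None                          # empty or listed value -> NULL
    if col_type == "boolean":
        if raw.lower() in ("true", "1", "yes"):
            return True
        if raw.lower() in ("false", "0", "no"):
            return False
        raise ValueError(f"not a boolean: {raw!r}")
    if col_type == "integer":
        return int(raw)
    if col_type == "decimal":
        return Decimal(raw)
    return raw                               # text columns pass through

print(convert_cell("yes", "boolean"))   # → True
print(convert_cell("#N/A", "integer"))  # → None
```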

## Output and Results

After import, the command displays a summary and (if present) a list of errors.

## Limits and Constraints

- File paths must resolve within the current working directory (path traversal is blocked).
- Batch size must be between 1 and 10000.
- Files larger than `--maxFileSizeMB` are rejected.
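A path-traversal check of this kind is typically implemented by resolving the candidate path and testing containment. A minimal sketch of the idea (not hana-cli's actual validation code):

```python
from pathlib import Path

def is_within(user_path, base=None):
    """True if user_path resolves inside base (default: current directory)."""
    base = (base or Path.cwd()).resolve()
    target = (base / user_path).resolve()
    # resolve() collapses "../" segments and symlinks before the test.
    return target == base or base in target.parents

print(is_within("data/sales.csv", Path("/tmp")))    # inside -> True
print(is_within("../../etc/passwd", Path("/tmp")))  # escapes -> False
```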

## Additional Examples

```bash
# Excel file import with name matching
hana-cli import -n ./data/sales.xlsx -o excel -t SALES_DATA -m name

# Truncate before import
hana-cli import -n ./data/refresh.csv -t MASTER_DATA --truncate

# Excel worksheet selection
hana-cli import -n report.xlsx -o excel -t MONTHLY_SALES --worksheet 2

# Large CSV file with a higher batch size
hana-cli import -n transactions.csv -t TRANSACTIONS --batchSize 5000
```

## Performance Considerations

### Batch Size Selection

| Scenario | Recommended Batch Size | Rationale |
| --- | --- | --- |
| Row size > 1 KB | 500-1000 | Reduce memory pressure |
| Row size < 100 bytes | 5000-10000 | Maximize throughput |
| File size > 100 MB | 500-1000 | Balance memory and performance |
| Memory-constrained (< 4 GB) | 500 | Conservative approach |
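Batched inserts split the parsed rows into `--batchSize`-sized chunks, so the trade-off above is per-statement overhead versus per-batch memory. A generic chunking sketch (illustrative, not hana-cli code):

```python
def batches(rows, batch_size=1000):
    """Yield consecutive slices of at most batch_size rows."""
    if not 1 <= batch_size <= 10000:  # limit documented above
        raise ValueError("batch size must be between 1 and 10000")
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

# 2500 rows with batch size 1000 -> batches of 1000, 1000, 500
sizes = [len(b) for b in batches(list(range(2500)), 1000)]
print(sizes)  # → [1000, 1000, 500]
```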

### Excel Cache Mode Selection

| Mode | Performance | Memory | Best For |
| --- | --- | --- | --- |
| `cache` (default) | Fastest | Higher | Most files, < 100 MB |
| `emit` | Slower | Lower | Large files > 100 MB |
| `ignore` | Variable | Lowest | Extreme memory constraints |

## Error Handling

The command handles common error scenarios:

- File not found or invalid file format
- Column mismatch or missing required columns
- Data type conversion errors
- Database constraint violations

## Permission Requirements

- Read access to the source file
- Write/Insert privileges on the target table
- If using `--truncate`: Delete privileges on the target table

## Tips & Best Practices

1. Back up the target table before any truncate-and-import operation.
2. Test against a development table before running in production.
3. Use `auto` matching for most files.
4. Tune `--batchSize` for your environment and data shape.
5. Use `--excelCacheMode emit` for very large Excel files.

See the Commands Reference for other commands in this category.

## See Also