Dashboard Overview

The Datashare dashboard is your export control panel. It displays all historical and active jobs, with options to:
  • Search by Job ID to find specific exports
  • Filter by status: Pending, Running, Completed, or Failed
  • View credits — your GB balance is shown in the top-right corner
  • Create Export — start a new export job

Create Export Workflow

The Create Export screen has a three-panel layout:
  • Schema Explorer (left panel): select chain, dataset type, and fields
  • Filters & Destination (middle panel): set date range, wallet/token filters, output format, and S3 destination
  • Preview & Export (right panel): view the estimate, preview sample rows, and trigger the export
The workflow is: select data → scope with filters → choose destination → estimate → export.

Step 1: Select Your Data

  1. Choose a chain. Each export job covers a single chain; see Supported Chains for the full list.
  2. Choose a dataset. See Supported Data for available types: Token Transfers, Native Transfers, NFT Transfers, Swap Events, Liquidity Events, plus raw data.
  3. Select fields. After choosing a dataset, expand the field list and select only the fields you need; export size and GB consumption grow proportionally with the number of fields.
DataShare exports raw on-chain data. Token names, symbols, logos, spam labels, and metadata enrichment are not included. Plan for separate metadata enrichment post-export if needed.
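Because GB consumption scales with both row count and the number of selected fields, it can help to sanity-check expected size before estimating in the UI. A minimal sketch, assuming hypothetical per-field byte widths (these are illustrative numbers, not DataShare's actual sizing model):

```python
# Hypothetical per-field byte widths -- illustration only,
# not DataShare's actual sizing model.
FIELD_BYTES = {
    "from": 20, "to": 20, "value": 32,
    "token_address": 20, "block_timestamp": 8,
    "log_index": 4, "tx_hash": 32,
}

def estimate_gb(row_count: int, fields: list[str]) -> float:
    """Rough uncompressed size: rows x summed field widths."""
    bytes_per_row = sum(FIELD_BYTES[f] for f in fields)
    return row_count * bytes_per_row / 1e9

# Trimming the field list cuts the estimate proportionally.
minimal = estimate_gb(10_000_000, ["from", "to", "value"])
full = estimate_gb(10_000_000, list(FIELD_BYTES))
print(f"minimal: {minimal:.2f} GB, full: {full:.2f} GB")
```

The exact widths do not matter; the point is that dropping unneeded fields reduces consumption linearly.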

Step 2: Apply Filters

Set date range, wallet address, and token address filters to control the scope and cost of your export. See Filters & Scoping for full details.

Step 3: Choose Destination & Format

Output Format
  • Parquet: best for Athena, Spark, DuckDB, and BigQuery. Columnar and highly compressed (5–10x); recommended for analytics.
  • CSV: best for Excel and general compatibility. Larger files than Parquet; compresses well with gzip.
  • JSON: best for debugging and human inspection. Largest output; useful for spot-checking data.
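The relative sizes are easy to demonstrate with a synthetic sample. A rough stdlib-only comparison (illustrative row shapes, not real export data):

```python
import csv, gzip, io, json, random

# Synthetic transfer-like rows -- illustrative shapes only.
random.seed(0)
rows = [
    {
        "from": f"0x{random.getrandbits(160):040x}",
        "to": f"0x{random.getrandbits(160):040x}",
        "value": str(random.getrandbits(64)),
    }
    for _ in range(1000)
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["from", "to", "value"])
writer.writeheader()
writer.writerows(rows)
csv_bytes = buf.getvalue().encode()

json_bytes = json.dumps(rows).encode()  # repeats keys on every row
gzip_bytes = gzip.compress(csv_bytes)   # CSV compresses well with gzip

print(f"JSON: {len(json_bytes)}  CSV: {len(csv_bytes)}  CSV+gzip: {len(gzip_bytes)}")
```

Parquet itself needs a third-party library, so it is omitted here; the point is the ordering the table describes, with JSON largest and compressed columnar output smallest.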
S3 Destination
Select a saved destination or add a new one. Destination profiles are reusable across future jobs. See S3 Bucket Setup for configuration instructions, or Export Options for all supported providers.
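Before saving a destination profile, a quick local check of the bucket name can catch typos early. A sketch based on a simplified subset of the published S3 bucket naming rules:

```python
import re

# Simplified subset of S3 bucket naming rules: 3-63 characters,
# lowercase letters, digits, hyphens, and dots; must start and
# end with a letter or digit.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def valid_destination(bucket: str, prefix: str = "") -> bool:
    """Sanity-check an S3 destination before saving it as a profile."""
    if not BUCKET_RE.match(bucket):
        return False
    return not prefix.startswith("/")  # prefixes are relative object keys
```

This only validates naming; it does not verify the bucket exists or that DataShare has write permission, which the S3 Bucket Setup guide covers.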

Step 4: Estimate

Click Estimate before triggering the export. This gives you:
  • The GB of credits the export will consume
  • A sample row preview to verify your schema
Estimates are free and can be run as many times as needed.
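Since the estimate is expressed in GB of credits, deciding whether to top up is simple arithmetic. A trivial helper, assuming you track your balance outside the UI:

```python
def ready_to_export(estimate_gb: float, balance_gb: float) -> tuple[bool, float]:
    """Return (ok, shortfall_gb): top up by shortfall_gb before exporting."""
    shortfall = max(0.0, estimate_gb - balance_gb)
    return shortfall == 0.0, shortfall
```

Running this against each re-estimate costs nothing, so iterate on filters and fields until the shortfall is zero before moving to the Export step.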

Step 5: Export

Once you click Export, the system locks your configuration and begins processing.
There is a 5-minute export window after you click Export, so top up credits and finalize your S3 configuration before this step. Navigating away or letting the session time out during this window may require re-running the estimate.

Your First Export Recipe

Use this minimal configuration to validate the end-to-end flow without significant credit spend:
  • Chain: Ethereum. Highest activity; a good scale test.
  • Dataset: Token Transfers. The most commonly used dataset, with a well-understood schema.
  • Date Range: last 24 hours. The smallest reasonable validation window.
  • Wallet Address: one recognized address. Lets you verify output against known activity.
  • Fields: from, to, value, token_address, block_timestamp. A minimal schema confirming data delivery.
  • Format: Parquet. Smallest files; compatible with DuckDB and Athena.
After exporting:
  1. Check your S3 bucket for output files
  2. Query locally with DuckDB:
SELECT * FROM read_parquet('*.parquet') LIMIT 10;
  3. Or use Athena to query directly from S3
  4. Confirm rows match the expected wallet activity
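Before querying, you can also verify that downloaded files are intact Parquet: the format opens and closes with the 4-byte magic `PAR1`. A minimal stdlib check:

```python
def looks_like_parquet(path: str) -> bool:
    """Parquet files begin and end with the b'PAR1' magic bytes."""
    with open(path, "rb") as f:
        head = f.read(4)
        if len(head) < 4:
            return False  # too small to be a valid Parquet file
        f.seek(-4, 2)     # seek to the last 4 bytes
        tail = f.read(4)
    return head == b"PAR1" and tail == b"PAR1"
```

A file that fails this check was likely truncated in transfer and should be re-downloaded before debugging the query side.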