```typescript
import fs from 'fs';
import LightsparkGrid from '@lightsparkdev/grid';

const client = new LightsparkGrid({
  username: process.env['GRID_CLIENT_ID'], // This is the default and can be omitted
  password: process.env['GRID_CLIENT_SECRET'], // This is the default and can be omitted
});

const response = await client.customers.bulk.uploadCsv({
  file: fs.createReadStream('path/to/file'),
});

console.log(response.jobId);
```

Example response:

```json
{
  "jobId": "Job:019542f5-b3e7-1d02-0000-000000000006",
  "status": "PENDING"
}
```

Upload a CSV file containing customer information for bulk creation. The CSV file must follow a specific format, with required and optional columns that depend on the customer type.
The CSV file should have the following columns:

- Required columns for all customers:
- Required columns for individual customers:
- Required columns for business customers:
- Optional columns for all customers:
- Optional columns for individual customers:
- Optional columns for business customers:
Example CSV:

```csv
umaAddress,platformCustomerId,customerType,fullName,birthDate,addressLine1,city,state,postalCode,country,platformAccountId,businessLegalName
john.doe@uma.domain.com,customer123,INDIVIDUAL,John Doe,1990-01-15,123 Main St,San Francisco,CA,94105,US,,
acme@uma.domain.com,biz456,BUSINESS,,,400 Commerce Way,Austin,TX,78701,US,,
```
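Building rows in exactly the column order of the header above is error-prone by hand. The sketch below is a hypothetical helper (not part of the SDK) that assembles one row from a partial record, filling unused columns with empty fields and applying RFC 4180-style quoting; the column list is taken from the example header.

```typescript
// Column order from the example CSV header; the helper itself is an assumption,
// not an SDK utility.
const COLUMNS = [
  'umaAddress', 'platformCustomerId', 'customerType', 'fullName', 'birthDate',
  'addressLine1', 'city', 'state', 'postalCode', 'country',
  'platformAccountId', 'businessLegalName',
] as const;

type CustomerRow = Partial<Record<(typeof COLUMNS)[number], string>>;

// Quote a field only when it contains a comma, quote, or newline (RFC 4180 style).
function escapeField(value: string): string {
  return /[",\n]/.test(value) ? `"${value.replace(/"/g, '""')}"` : value;
}

// Emit one CSV row with every column present, in header order.
function toCsvRow(row: CustomerRow): string {
  return COLUMNS.map((col) => escapeField(row[col] ?? '')).join(',');
}

const line = toCsvRow({
  umaAddress: 'john.doe@uma.domain.com',
  platformCustomerId: 'customer123',
  customerType: 'INDIVIDUAL',
  fullName: 'John Doe',
  birthDate: '1990-01-15',
  addressLine1: '123 Main St',
  city: 'San Francisco',
  state: 'CA',
  postalCode: '94105',
  country: 'US',
});

console.log(line);
```

Because every row always has all twelve fields, individual and business customers can share one file with the irrelevant columns left empty.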
The upload process is asynchronous and returns a job ID that can be used to track progress. You can monitor the job status using the `/customers/bulk/jobs/{jobId}` endpoint.
Authentication: API token authentication using the format `<api token id>:<api client secret>`.

Request body: a CSV file containing customer information.