Feature: Batch Import

Batch import allows users to efficiently add jobs and customers by importing CSV and XLSX files. This feature proves invaluable when dealing with substantial data volumes.

Although it appears straightforward, this feature demanded significant effort and evolved through multiple versions. It was built from the ground up, and several unique innovations were needed to arrive at a deceptively simple yet powerful solution.

Key Functions include:

- Quick drag-and-drop importing

- No data mapping needed if correctly set up

- Single file use for complex data import

- Advanced row merging and grouping support

Challenges Addressed:

- "Frequent need to create a large number of jobs, manual entry is overly time-consuming"

- "Migration of numerous customer records for new clients, often exceeding 7000+ records"

- "Requirement to transfer data between various ERP systems and the Delivery platform"

I admit that this feature wasn't part of the initial design, and it took time to comprehend its significance and the user need behind it.

"At its core, batch import facilitates data transfer between diverse systems."

Most clients operate their own ERP or order management system to retain order data. Once a client is onboarded, these orders must be transferred to the Delivery system for drivers to fulfill, and eventually synchronized back.

Several common approaches exist, including:

- REST API

- Batch import

- Automation systems (e.g., Zapier integration)

- Plugin integration

- Webhooks

For smaller clients with limited resources, batch import emerges as a prevalent method for data transfer.

After researching the best batch import functions available, I identified fundamental requirements for this feature to succeed, such as:

- Support for various formats (e.g., CSV, XLSX)

- Drag-and-drop and direct file selection

- Data mapping

- Data validation and formatting

- Data parsing and compilation

- Error handling

Importing jobs can often be perplexing, particularly when data mapping is necessary or errors occur. Hence, my primary objective was to minimize unnecessary steps, ensuring simplicity for users while concealing the underlying intricate logic.

Challenge 1: Can the Data Mapping Step Be Skipped?

If you've utilized the batch import function previously, you likely understand that data mapping is both crucial and perplexing. This step is vital because the system needs to determine which columns in your file correspond to specific purposes, necessitating correct mapping of each column. Incorrect mapping can lead to errors or the import of inaccurate data.

To bypass this step, the system must be intelligent enough to associate columns with particular fields on its own, which means it has to retain a record of that association.

My approach is straightforward: during batch import, the system scans all existing columns in the file and cross-references them with an established mapping record. If successful matches are found, this step can be automatically omitted from the user interface (UI), rendering it invisible. However, if certain columns lack a match, the data mapping UI will activate, prompting the user to map the column fields to resolve the issue.
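To make this concrete, here is a minimal TypeScript sketch of the matching check, assuming the mapping record is a simple header-to-field lookup (all names and shapes here are illustrative, not our actual implementation):

```typescript
// Illustrative sketch only: a saved mapping record maps normalized
// file column headers to system fields.
type MappingRecord = Record<string, string>;

interface MappingResult {
  resolved: Record<string, string>; // file column -> system field
  unmatched: string[];              // columns that still need manual mapping
}

function resolveMapping(fileHeaders: string[], saved: MappingRecord): MappingResult {
  const resolved: Record<string, string> = {};
  const unmatched: string[] = [];

  for (const header of fileHeaders) {
    // Normalize the header before looking it up in the saved record.
    const field = saved[header.trim().toLowerCase()];
    if (field) {
      resolved[header] = field;
    } else {
      unmatched.push(header);
    }
  }
  return { resolved, unmatched };
}

// Example: one unknown column means the mapping UI appears for it alone.
const savedRecord: MappingRecord = {
  "order number": "orderNumber",
  "stop type": "stopType",
  "address": "addressLine1",
};
const { unmatched } = resolveMapping(
  ["Order Number", "Stop Type", "Address", "Custom Note"],
  savedRecord,
);
const showMappingUI = unmatched.length > 0; // true: "Custom Note" is unmatched
```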

Several options exist for storing this mapping record:

- Browser cache

- Browser local storage

- Backend database

We opted for the last option, ensuring every user within an account can access the mapping record. Because a client's files are typically consistent, a single mapping profile usually suffices for the entire company, making the backend database the best choice.

If a customized file contains different column names, the data mapping screen appears automatically, and the new mapping can be saved as an additional record.

Challenge 2: A Complex Data Structure for Importing

In contrast to most batch import functions available, the unique data structure of the Delivery product demands a more sophisticated design and architecture to operate effectively.

Consider these examples:

- Importing jobs for both single-stop and multiple stops

- Importing various todos with specific sequences

- Importing jobs with multiple items

These requirements necessitate row merging instead of employing potentially limitless columns per row. The introduction of row merging, however, results in intricate and challenging logic.

Thankfully, through meticulous planning and implementation, all these complexities were adeptly managed.

To document the technical details and the logic behind each column, I compiled a Google Doc, which served as the team's definitive source of information during development.

🔗 Link: Google Doc ➜

Batch Import Examples

1️⃣ Import a Pickup-Dropoff (P-D) Job with a system auto-generated Order Number 👇

Logic: An "AUTO…" prefix triggers the system to auto-generate order numbers. In this case, the value after the "AUTO" keyword groups rows together.

2️⃣ Import a Pickup-Dropoff (P-D) job with a customized order number 👇

Logic: Rows are grouped into one job using the "Order Number," while the "Stop Type" specifies the type of stop to create. A "P" in the "Stop Type" column designates the row as a pickup stop.

3️⃣ Import a Single-Stop Job with multiple Items inside 👇

Logic: Rows are grouped into one job using the "Order Number," and the "Stop No" facilitates merging rows into a single stop. The same "Stop No" value merges rows on the item level.
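Examples 2 and 3 boil down to a two-level grouping: the "Order Number" merges rows into one job, and the "Stop No" merges rows into one stop, with extra rows contributing items. Here is a simplified TypeScript sketch of that idea; the row shape is hypothetical, and the real logic handles many more columns, including the "AUTO" keyword from Example 1:

```typescript
// Hypothetical simplified row shape after parsing and column mapping.
interface ImportRow {
  orderNumber: string;   // an "AUTO…" value requests a generated number
  stopNo: string;        // rows sharing a Stop No merge into one stop
  stopType?: "P" | "D";  // pickup or dropoff
  itemName?: string;     // extra rows contribute items at the item level
}

interface Stop { stopNo: string; stopType?: "P" | "D"; items: string[]; }
interface Job { orderNumber: string; autoGenerate: boolean; stops: Stop[]; }

function groupRows(rows: ImportRow[]): Job[] {
  const jobs = new Map<string, Job>();

  for (const row of rows) {
    // Level 1: rows with the same Order Number merge into one job.
    let job = jobs.get(row.orderNumber);
    if (!job) {
      job = {
        orderNumber: row.orderNumber,
        autoGenerate: row.orderNumber.startsWith("AUTO"),
        stops: [],
      };
      jobs.set(row.orderNumber, job);
    }

    // Level 2: rows with the same Stop No merge into one stop,
    // so additional rows only add items to that stop.
    let stop = job.stops.find((s) => s.stopNo === row.stopNo);
    if (!stop) {
      stop = { stopNo: row.stopNo, stopType: row.stopType, items: [] };
      job.stops.push(stop);
    }
    if (row.itemName) {
      stop.items.push(row.itemName);
    }
  }
  return [...jobs.values()];
}
```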

4️⃣ Import a Single-Stop Job with Multiple Todos (both stop-level and item-level) in Sequence 👇

Logic: The "Stop Todos" and "Item Todos" columns dictate todos to be created. Each todo has a fixed ID, and a series of numbers indicate the sequence.

e.g. "1225" in "Stop Todos" will arrange todos on the Stop level in the sequence.

1. Signature [id: 1]

2. POD [id: 2]

3. POD [id: 2]

4. Note [id: 5]

The same rules apply to "Item Todos".
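The sequence parsing itself can be sketched in a few lines of TypeScript, assuming a fixed catalog of todo IDs (only the three IDs from the example above are shown, and the names are illustrative):

```typescript
// Hypothetical todo catalog; the real product has a fixed list of IDs,
// of which only the three used in the example appear here.
const TODO_TYPES: Record<string, string> = {
  "1": "Signature",
  "2": "POD",
  "5": "Note",
};

// Each digit in the cell is a todo ID; their order is the sequence.
function parseTodoSequence(cell: string): string[] {
  return [...cell].map((id) => {
    const todo = TODO_TYPES[id];
    if (!todo) {
      throw new Error(`Unknown todo id: ${id}`);
    }
    return todo;
  });
}

parseTodoSequence("1225");
// -> ["Signature", "POD", "POD", "Note"]
```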

5️⃣ Creating 3 jobs with different geocoding methods (GPS, Lat&Lng, Address Info) 👇

Logic: Various fields can be used to geocode an address, from an existing "Customer ID" (used as a fallback) to actual address line information. The file imports three jobs using different geocoding methods:

- IMP-GEO-GPS: using GPS only

- IMP-GEO-LAT-LNG: using separated coordinates

- IMP-GEO-ADDRESS: using address line information

Note: Smart caching minimizes Google geocoding API calls and reduces costs.
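Here is an illustrative TypeScript sketch of how the fallback order and the cache might fit together; the exact precedence and the service calls are my assumptions for the sketch, not the precise production logic:

```typescript
interface RowAddress {
  gps?: string;        // combined "lat,lng" string
  lat?: number;        // separated coordinate columns
  lng?: number;
  addressLine?: string;
  customerId?: string; // fall back to the customer's saved location
}

interface LatLng { lat: number; lng: number; }

// Stand-ins for the real services; not the actual API.
declare function geocodeWithProvider(address: string): Promise<LatLng>;
declare function lookupCustomerLocation(customerId: string): Promise<LatLng>;

// Cache keyed by address text so repeated addresses in a large file
// trigger only one paid geocoding call.
const geocodeCache = new Map<string, LatLng>();

async function resolveLocation(row: RowAddress): Promise<LatLng> {
  // 1) A combined GPS string is used directly.
  if (row.gps) {
    const [lat, lng] = row.gps.split(",").map(Number);
    return { lat, lng };
  }
  // 2) Separated coordinate columns come next.
  if (row.lat !== undefined && row.lng !== undefined) {
    return { lat: row.lat, lng: row.lng };
  }
  // 3) Address text goes through the geocoder, behind the cache.
  if (row.addressLine) {
    const cached = geocodeCache.get(row.addressLine);
    if (cached) return cached;
    const result = await geocodeWithProvider(row.addressLine);
    geocodeCache.set(row.addressLine, result);
    return result;
  }
  // 4) Finally, fall back to the saved customer location.
  if (row.customerId) {
    return lookupCustomerLocation(row.customerId);
  }
  throw new Error("No geocodable fields on this row");
}
```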

Challenge 3: Error Handling

This is another substantial challenge that demands our attention. Given the nature of the batch import function (data transfer), any mishap can lead to significant issues. For instance, incorrect GPS data might direct drivers miles away from the intended location, or valuable items could end up being delivered to the wrong recipient.

Due to these potential pitfalls, we treat this matter with utmost seriousness. Our objective is to promptly alert users when errors occur, providing them with clear details on what went wrong and where corrections are needed. Ideally, users should be able to continue importing tasks and address errors or incorrect data at a later point.

Upon encountering an error, whether due to invalid data types or geocoding failure, the system will highlight the problematic rows and cells, guiding the user through the rectification process. Importantly, users can proceed with the import, and all erroneous data will be automatically downloaded in the original file format. This feature proves highly advantageous, particularly when importing a significant volume of data, like 7000 customer records. It prevents valuable time from being wasted during the validation process.
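A minimal TypeScript sketch of this "proceed and export the failures" flow (the row shape and function names are illustrative):

```typescript
interface ValidatedRow {
  rowIndex: number;  // position in the original file, for highlighting
  values: string[];  // raw cell values in the original column order
  errors: string[];  // empty means the row passed validation
}

// Split validated rows so the import proceeds with the good rows
// while the failures are exported for later correction.
function partitionRows(rows: ValidatedRow[]) {
  const valid = rows.filter((r) => r.errors.length === 0);
  const invalid = rows.filter((r) => r.errors.length > 0);
  return { valid, invalid };
}

// Rebuild a file of only the failed rows, keeping the original headers
// so the corrected file can be re-imported as-is (CSV quoting omitted
// for brevity).
function toErrorCsv(headers: string[], invalid: ValidatedRow[]): string {
  const lines = [headers.join(",")];
  for (const row of invalid) {
    lines.push(row.values.join(","));
  }
  return lines.join("\n");
}
```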

We have planned further trials and enhancements to refine error handling (on the roadmap through August 2023).

We are exploring the possibility of letting users rectify incorrect data during the import process, without needing to alter the original file.

🤔 Retrospective

This feature, one among several I've worked on, demanded not only excellent design thinking but also a deep understanding of the technical intricacies required to ensure functionality.

Leveraging my prior coding experience, I was able to understand and help shape the entire working process, extending beyond UI design. Collaborating with team members, I delved into how data was processed, parsed, and updated, tracing its trajectory all the way to the database.

While we succeeded in making this feature operational, its complexity can be overwhelming for entry-level users at first. Fortunately, most clients have established file structures and require only the essential functions. Our support team sets up a working template that clients can replicate with ease, and when advanced functions are necessary, a pathway for implementation exists.

At the same time, this feature remains an evolving endeavor, continually advancing to deliver a better experience.

Posted on Aug 19, 2023
NING
Integrates visual, user, and logic into elegant solutions
