Getting Started with DataStar (Early Adopter Phase)

DataStar is Optilogic’s new AI-powered data product designed to help supply chain teams build and update models & scenarios and power apps faster & easier than ever before. It enables users to create flexible, accessible, and repeatable workflows with zero learning curve—combining drag-and-drop simplicity, natural language AI, and deep supply chain context.

Today, up to an estimated 80% of a modeler's time is spent on data—connecting, cleaning, transforming, validating, and integrating it to build or refresh models. DataStar shrinks that time by up to 50%, enabling teams to:

  • Answer more questions faster
  • Unlock repeatable value across business reviews
  • Focus on strategic decisions, not data wrangling

The 2 main goals of DataStar are 1) ease of use and 2) effortless collaboration. These are achieved by:

  • Providing AI-powered, no-code automation with deep supply chain context
  • Supporting drag-and-drop workflows, natural language commands, and advanced scripting (SQL/Python)
  • Full integration into the Optilogic platform: users can prep data, trigger model & scenario runs, and push insights to apps or dashboards
  • Enabling scalable, collaborative, cloud-native modeling for repeatable decision-making at speed

DataStar is currently in the Early Adopter (EA) phase and is rapidly evolving while we work towards a General Availability release later this year. Therefore, this documentation will be regularly updated as new functionality becomes available. If you are interested in learning more about DataStar or the Early Adopter program, please contact the Optilogic support team at support@optilogic.com.

In this documentation, we will start with a high-level overview of the DataStar building blocks. Next, creating projects and data connections will be covered before diving into the details of adding tasks and chaining them together into macros, which can then be run to accomplish the data goals of your project.

DataStar Overview

Before diving into more details in later sections, this section will describe the main building blocks of DataStar, which include Data Connections, Projects, Macros, and Tasks.

As DataStar is currently in the Early Adopter phase, this document will be updated regularly as more features become available. In this section, references to future capabilities that are not yet released are included in order to paint the larger picture of how DataStar will work. The text makes clear which parts are available in the first Early Adopter release and which are not.

Data Connections

Since DataStar is all about working with data, Data Connections are an important part of DataStar. These enable users to quickly connect to and pull in data from a range of data sources. Data Connections in DataStar:

  • Are global to the DataStar application – meaning each project within DataStar can use any of the data sources that have been set up as Data Connections.
  • Can also be set up from within a DataStar project – they then become available for use in other DataStar projects too.
  • Can be of the following types (the last 6 indicated with an * are not yet available in the Early Adopter program):
    • Postgres – an open-source relational database management system that supports both SQL and JSON querying
    • CSV Files – files containing data in the comma separated values format, which can be created by and opened in Excel
    • Cosmic Frog Models – a Cosmic Frog model which is a Postgres database using a specific data schema called Anura. Often the projects in DataStar will populate Cosmic Frog model input tables to build complete models that are ready to be run by one of the Cosmic Frog engines and/or read in Cosmic Frog output tables for output analysis
    • Excel* – spreadsheet application developed by Microsoft
    • MySQL* – an open-source relational database management system that supports SQL querying
    • SQLite* – an open-source relational database engine used as a library in applications
    • OneDrive* – cloud storage service provided by Microsoft
    • ODBC Connection* – a standard way for applications to connect to various databases. This means that if your data source is not one of the types listed here, you may still be able to connect to it if the target database has a specific ODBC driver available
    • Snowflake* - a cloud-based data platform that provides a data warehouse as a service (DWaaS)
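
As a concrete illustration of the "used as a library in applications" point for SQLite: Python ships with the `sqlite3` module, so no separate database server is needed. The sketch below is purely illustrative (the table and column names are made up and unrelated to DataStar's own connection setup):

```python
import sqlite3

# SQLite runs in-process: the "database" is just a file (or here, memory).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (origin TEXT, destination TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?)",
    [("DC1", "StoreA", 10), ("DC1", "StoreB", 5), ("DC2", "StoreA", 7)],
)
# Query the data back with plain SQL.
rows = conn.execute(
    "SELECT origin, SUM(qty) FROM shipments GROUP BY origin ORDER BY origin"
).fetchall()
print(rows)  # [('DC1', 15), ('DC2', 7)]
conn.close()
```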

Projects, Macros, and Tasks

Projects are the main container of work within DataStar. Typically, a Project will aim to achieve a certain goal by performing all or a subset of the following: importing specific data, then cleansing, transforming & blending it, and finally publishing the results to another file/database. The scope of DataStar Projects can vary greatly, as the following 2 examples show:

  • Cleanse and filter a specific set of historical supply chain data.
  • Build a Cosmic Frog model from scratch using the raw data from the data sources available in DataStar’s Data Connections, then run the model, analyze its outputs, and finally generate reports at the desired level of aggregation.

Projects consist of one or more macros, which in turn consist of one or more tasks. Tasks are the individual actions or steps that can be chained together within a macro to accomplish a specific goal. In future, multiple macros can also be chained together inside another macro in order to run a larger process. Tasks are split into the following 3 categories in DataStar:

  • Transform – using these tasks, users can convert data from its raw state in the Data Connections into the clean, predefined format they desire. These tasks will include those that can import, export, select, group, delete, update, pivot, and unpivot data. For the Early Adopter program, the Import transform task is initially available.
  • Execute & Automate – these tasks aim to make users as productive as possible by allowing them to run Cosmic Frog models, SQL or Python code, and other Macros as part of a Macro. Notifications can also be sent to alert users that a certain Macro has completed. Just the Run SQL task is available in the first version for the Early Adopter program.
  • AI Agents – in future updates of DataStar, these tasks will be able to perform common tasks using artificial intelligence. Think of automatically comparing scenario outputs or filling out missing data in input tables.

The next screenshot shows an example Macro called Shipments which consists of 7 individual tasks that are chained together to create transportation policies for a Cosmic Frog model from imported Shipments and Costs data. As a last step, it also runs the model with the updated transportation policies:

Note that not all tasks to build a macro like this are yet available in the current Early Adopter version of DataStar.

Project Sandbox

Every project by default contains a Data Connection named Project Sandbox. This data connection is not global to all DataStar projects; it is specific to the project it is part of. The Project Sandbox is a Postgres database where users generally import the raw data from the other data connections, perform transformations, save intermediate states of data, and then publish the results out to a Cosmic Frog model (which is a data connection separate from the Project Sandbox connection). It is also possible that some of the data in the Project Sandbox is the final result/deliverable of the DataStar Project, or that the results are published into a different type of file or system that is set up as a data connection rather than into a Cosmic Frog model.

How Data Connections, Projects, and Macros Relate to Each Other

The next diagram shows how Data Connections, Projects, and Macros relate to each other in DataStar:

  1. In this example, there are 7 Data Connections configured in DataStar, see the rectangle with green background on the left:
    1. A OneDrive connection called Historical Data (OneDrive connections are not yet available in the current Early Adopter DataStar version)
    2. A Snowflake connection called Enterprise Data (Snowflake connections are not yet available in the current Early Adopter DataStar version)
    3. A Postgres connection called Location Data
    4. A CSV connection called Cost Data
    5. A CSV connection called Capacity Data
    6. A Cosmic Frog connection called Neo NA Model
    7. A Cosmic Frog connection called Global Model
  2. Note that the 2 Cosmic Frog connections displayed here on the right-hand side are the same 2 as shown in the list on the left, they are just repeated in the diagram to facilitate explaining the flow of data.
  3. There are 2 projects set up in DataStar, see the 2 rectangles with blue background in the middle:
    1. Project 1 creates Policies tables for the Cosmic Frog model named Neo NA Model, a network optimization model for the North America geography.
    2. Project 2 builds, runs, and analyzes a complete Cosmic Frog model named Global Model from raw data.
  4. Looking at Project 1, we see that:
    1. It uses 3 of the 7 Data Connections available (blue arrows):
      1. Two to pull data in from: the Historical Data, and Cost Data connections.
      2. One to publish data into: the Neo NA Model.
    2. It has its own Project Sandbox as an additional Data Connection which is specific to this project only.
    3. It contains 3 macros: Shipments, Production, and Inventory. The Shipments macro can look similar to the example one seen in the previous screenshot.
    4. The 3 macros pull data from the Historical Data, Cost Data, and Project Sandbox connections.
    5. The 3 macros publish data into the Project Sandbox and the Neo NA model connections. The completed Transportation Policies, Production Policies, and Inventory Policies tables are published into the Cosmic Frog model.
  5. Similarly, looking at Project 2, we follow the yellow arrows to understand which Data Connections are used to pull data from and publish data into. Note that the Global Model connection is used to publish results into by the “Publish to Model” macro which populates the model’s input tables and it is also used as a connection to pull data from for the “Output Analysis” macro after the model has run to completion.

Early Adopter Development Note

For the remainder of this document, only current Early Adopter DataStar functionality is shown in the screenshots (with a few exceptions, which will be noted in the text). The text mostly covers current functionality and will at times reference features that will be included in future DataStar versions. Within DataStar, users may notice buttons and options in drop-down and right-click menus that have been disabled (greyed out or not clickable), since new functionality is being worked on continuously. These will be enabled over time, and other new features will also gradually be added.

Creating Projects & Data Connections

On the start page of DataStar, users are shown the existing projects and data connections. These can be opened or deleted here, and users can also create new projects and data connections on this start page.

The next screenshot shows the existing projects in card format:

  1. When logged into the Optilogic platform, click on the DataStar icon in the list of available applications on the left-hand side to open DataStar. Your DataStar icon may be in a different location in the list, and if it is not visible at all, then click on the icon with 3 horizontal dots to show any applications that are not shown currently.
  2. We are on the Projects tab of the start page in the DataStar application.
  3. The projects are shown in card format (the left icon); the other option is to show them as a list (the right icon).
  4. When hovering over a project, the options to edit the project (rename it and/or update its description) and to delete the project become visible. When clicking on the delete project icon, a message asking user to confirm they want to delete the project comes up before actually deleting it.
  5. Users can quickly search the list of projects by typing in the Search text box; the list is filtered to projects whose names contain the typed text.

New projects can be created by clicking on the Create Project button in the toolbar at the top of the DataStar application:

  1. User clicked on the Create Project button which opened the Create Project form.
  2. Here, a Project Name can be entered.
  3. Optionally, user can write a Project Description.
  4. Under Project Type, user can currently just create a new Empty Project.
  5. Click on the Edit button to change the project’s appearance by choosing an icon and color.
  6. Click on the Add Project button to create the project.
  7. Note that on the right-hand side, Help for the currently open DataStar form is shown.

The next screenshot shows the Data Connections that have already been set up in DataStar in list view:

  1. We are on the Data Connections tab of the start page in the DataStar application.
  2. The Data Connections are shown in list format (right icon); the other option is to show them in card format (left icon) similar to the screenshot above of the Projects in card format.
  3. For each Data Connection we see the following details in the list: Name, Connection Type, Description, Created At, Owner, Last Edited, and Actions. Clicking on a column header sorts the table by that column in ascending order, clicking again sorts in descending order, and clicking a third time turns the sort off. Holding down both the Shift and Ctrl keys while clicking on multiple column headers sorts the table by those columns.
    1. Note that when hovering over the Actions field in a data connection row, icons to rename and delete the connection become visible. When users click on the delete icon, a message asking the user to confirm they want to delete the data connection comes up before actually deleting it.
  4. Users can quickly search the list of data connections by typing in the Search text box; the list is filtered to connections whose names contain the typed text.

New data connections can be created by clicking on the Create Data Connection button in the toolbar at the top of the DataStar application:

  1. The Create Data Connection form has been opened by clicking on the Create Data Connection button.
  2. First, a Data Connection Name needs to be entered.
  3. Optionally, user can write a Connection Description.
  4. The type of connection can be chosen from the Connection Type drop-down list. See the “Data Connections” section further above for a full list of connection types and a short description of each.

The remainder of the Create Data Connection form will change depending on the type of connection that was chosen as different types of connections require different inputs (e.g. host, port, server, schema, etc.). In our example, the user chooses CSV Files as the connection type:

  1. The Connection Type is now showing CSV Files per the selection user made.
  2. There are 2 options to select the CSV source file:
    1. The CSV file to be used for the Data Connection can be dragged and dropped onto this “Drag and drop” area from user’s computer. It will then be uploaded to the user’s /MyFiles/DataStar folder on the Optilogic platform. In case a file of the same name already exists in that location, it will be overwritten.
    2. User can browse the list of CSV files that exist in their Optilogic account already (not limited to files under /MyFiles/DataStar, will show all CSV files in their account) to select one as the source for the data connection. Note that:
      1. We can quickly find files of interest by typing in the Search box at the top of the list, which filters the list to files whose names contain the typed text.
      2. If not all 3 columns shown in the screenshot are visible, users can scroll right to also see the file's location in the user's Optilogic workspace.
      3. Options to customize the grid to users' needs (e.g. sorting, changing the order of columns, etc.) are explained in the Appendix.
  3. After selecting the CSV file to be used for the Data Connection, users can click on the Add Connection button to create the new data connection.

In our walk-through here, the user drags and drops a Shipments.csv file from their local computer on top of the Drag and drop area:

  1. User dragged and dropped their local Shipments.csv file in the “Drag and drop” area.
  2. Once the upload of the file is finished, a message in green font indicating the upload completed successfully is shown.
  3. The Shipments.csv file is now listed in the list of CSV files the user has available in their Optilogic account. As expected, the location of this file is /MyFiles/DataStar. Click on the file in the list to select it.
  4. User can then click on the Add Data Connection button to create the connection.

Inside a DataStar Project

Now let us look at a project when it is open in DataStar. We will first get a lay of the land with a high-level overview screenshot and then go into more detail for the different parts of the DataStar user interface:

  1. At the top of the DataStar application, users will find a toolbar:
    1. Clicking on the icon all the way to the left will take user back to DataStar’s start page where the lists of existing projects and data connections are shown, see also the previous section “Creating Projects & Data Connections”.
    2. The left part of the toolbar contains from left to right:
      1. Create Macro button: click on this button to create a new macro.
      2. Data Connections drop-down menu: options in the menu are to create a new data connection and, in future, to upload data.
      3. Manage Variables button: in future, this button will be enabled so users can pass in values that can be used/updated in their macros.
    3. The right part of the toolbar gives users quick options to access Leapfrog AI and to run macros.
  2. In the pane on the left-hand side of the application, either the list of Macros that the project contains (left tab) or the list of available Data Connections (right tab) is shown. In this screenshot, the Macros tab is the active tab.
    1. Macros can be expanded/collapsed; when expanded you see a list of all the tasks/macros that make up the macro. This will be shown in more detail below.
    2. Likewise, data connections can also be expanded/collapsed; when expanded you see the available schemas for database connections and (for all connection types) the tables contained in the data connection.
  3. In the pane on the right-hand side of the application, there are 3 tabs, from left to right:
    1. Tasks – here tasks from the Transform, Execute & Automate, and AI Agents categories can be chosen and dragged and dropped onto the Macro Canvas (the central part of the DataStar application) to add them to the currently active macro. Currently, Import (Transform category) and Run SQL (Execute & Automate category) tasks are available, more tasks will be gradually added.
    2. Configuration – the specific configuration parameters for the currently selected task can be set or updated here.
    3. Leapfrog – start or continue a conversation with Leapfrog here. Use natural language prompts, and Leapfrog will configure tasks for you!
  4. The central part of DataStar is called the Macro Canvas. Tasks can be dragged and dropped onto here and then connected to each other to build out a macro that will accomplish a specific data process. The macro canvas becomes active when user clicks on a macro or one of its tasks in the Macros tab on the left. The macro name is also listed in the tab at the top of the canvas.
  5. Tables present in any of the Data Connections can also be shown in the central part of DataStar by clicking on them in the Data Connections tab. This shows as an additional tab across the top of the macro canvas. Multiple macros and tables can be opened here at the same time, and users can switch between them by clicking on the tab of the macro/table they wish to show.
  6. At the bottom of the Macro Canvas, 2 tabs are showing:
    1. Logs – here it is tracked which task was run when and if it completed successfully.
    2. Task Results – this will show the resulting table of the currently selected task; this functionality is not yet included in the Early Adopter release.
  7. The 3 panes on the left-hand side, right-hand side, and to the bottom of the Macro Canvas can all be collapsed and expanded as desired. This can be done by clicking on the icons with the 2 greater than/less than signs, or 2 arrowheads pointing up/down.

Macros Tab

Next, we will dive a bit deeper into a macro:

  1. The macro named “Customers from Shipments” is selected on the Macros tab on the left-hand side panel of DataStar. Clicking on a macro in the Macros tab will also open it in the macro canvas.
  2. The macro has been expanded, so we see the list of tasks that are part of this macro. Users will note that:
    1. By default, each macro has a task named Start, which has its own specific icon and blue color. This task cannot be removed or renamed and the first actual task of the macro will be connected to it.
    2. Tasks from the Transform category have light blue icons associated with them, and those from the Automate & Execute category are green. The icon itself also indicates the type of task it is. For example, the “Import Raw Shipments” task is an Import task from the Transform category, and the “Create Unique Customers” task is a Run SQL task from the Automate & Execute category.
    3. Right-clicking on a Macro or a Task will bring up a context menu which can be used to Rename or Delete the Macro or Task.
  3. Use the Search text box to quickly find a macro/task whose name contains the typed text.
  4. This button can be used to expand or collapse all macros with one click.
  5. Click on the Create Macro button in the toolbar to add a new Macro to the project.

Macro Canvas

The Macro Canvas for the Customers from Shipments macro is shown in the following screenshot (note that the Export task shown is not yet available in the Early Adopter release):

  1. The tab tells us which macro we are looking at. Note that multiple macros can be opened here in multiple tabs and users can easily switch between them by clicking on the tab of the desired macro.
  2. The canvas currently shows 3 of the tasks that are part of the Customers from Shipments macro. The bottom part of a task contains the name and the top colored part of a task shows what type of task it is. For example:
    1. The task at the top connected to Start is an Import task from the light blue Transform category; its name is “Import Raw Shipments”.
    2. The task at the bottom left is a Run SQL task from the green Execute & Automate category; its name is “Create Unique Customers”.
  3. Tasks can be dragged and dropped onto the canvas from the Tasks list in the right-hand side pane. Once on the canvas, users can connect tasks by clicking in the middle of the right edge of the first task, holding the mouse down, and then clicking in the middle of the left edge of the next task. Please note that:
    1. DataStar helps users by showing a bigger circle when hovering over the middle of a left or right edge of a task.
    2. Tasks can be connected to multiple other tasks. If there are for example 2 tasks connected to a third task that succeeds the first 2, then this third task will not execute until both preceding tasks have completed.
    3. To delete a line that connects 2 tasks: click on the line (it will then become a dotted orange line), and then hit the Delete or Backspace key. Alternatively, right-click on the line and select Delete from the context menu that comes up.
  4. In the left bottom corner of the canvas users have access to the following controls, from top to bottom:
    1. Zoom in: clicking on this plus icon will increase the size of the tasks on the canvas; less of the total macro will be visible.
    2. Zoom out: clicking on this minus icon will decrease the size of the tasks on the canvas; more of the total macro will be visible.
    3. Fit view: clicking on the icon with 4 square corners will set the position and zoom-level of the canvas such that all tasks/macros that are part of the macro will be shown on the canvas, using up as much of the canvas space as possible.
    4. Toggle interactivity: not currently used.
  5. The grey rectangle at the right bottom of the canvas shows a small diagram of where all the tasks that are part of the macro are positioned. The smaller white rectangle within this grey rectangle indicates which part of the entire macro the canvas is showing currently. This is helpful when you have a macro with many tasks and you want to pan through it while it is zoomed in.
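
The execution rule noted above — a task with multiple incoming connections does not run until all of its preceding tasks have completed — amounts to executing the macro in topological order. A minimal Python sketch of that ordering, using the standard library and hypothetical task names (this illustrates the scheduling semantics only, not DataStar's actual engine):

```python
from graphlib import TopologicalSorter

# Hypothetical macro: two import tasks feed one Run SQL task.
# Each entry maps a task to the set of tasks it depends on.
deps = {
    "Import Raw Shipments": {"Start"},
    "Import Costs": {"Start"},
    "Create Unique Customers": {"Import Raw Shipments", "Import Costs"},
}

# static_order() yields tasks so that every task comes after all of
# its predecessors.
order = list(TopologicalSorter(deps).static_order())
print(order)
# "Create Unique Customers" always comes after BOTH of its predecessors.
assert order.index("Create Unique Customers") > order.index("Import Costs")
```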

In addition to the above, please note following regarding the Macro Canvas:

  1. Clicking on a task in the canvas does several things:
    1. Selects the task (i.e. highlights it) in the macro(s) it is part of in the Macros tab on the left-hand side pane.
    2. Opens the Configuration of the task in the right-hand side pane.
    3. In a future update, it will also show the results of the most recent time the task was run in the Task Results tab in the pane at the bottom of the Macro Canvas.
  2. Hovering over a task will make Run Task and Delete icons visible:
    1. When clicking on the left Run Task icon, the task is run by itself immediately. Its progress can be monitored in the Logs tab at the bottom of the macro canvas and in the Run Manager application on the Optilogic platform.
    2. After clicking on the Delete Task icon on the right, a confirmation message to ensure the user wants to delete the task will come up first before the task is removed.
  3. Users can position the canvas as they desire by clicking on it, holding the mouse down, and then moving the mouse to drag the canvas in any direction.
  4. Users can also zoom in/out on the canvas by using the mouse or 2 fingers on a trackpad (move them closer to each other to zoom out and further apart to zoom in).

Tasks Tab

We will move on to covering the 3 tabs on the right-hand side pane, starting with the Tasks tab:

  1. We are on the Tasks tab on the right-hand side pane in DataStar.
  2. As previously discussed, there are initially 2 task categories:
    1. Transform – these tasks can be used to perform commonly used data actions and currently just includes the Import task (Export coming soon!).
    2. Execute & Automate – these tasks aim to make users more productive by incorporating automation and currently only includes the Run SQL task.

Users can click on a task in the tasks list and then drag and drop it onto the macro canvas to incorporate it into a macro.

Configuration Tab of a Task

When adding a new task, it needs to be configured, which can be done on the Configuration tab. When a task is newly dropped onto the Macro Canvas its Configuration tab is automatically opened on the right-hand side pane. To make the configuration tab of an already existing task active, click on the task in the Macros tab on the left-hand side pane or click on the task in the Macro Canvas. The configuration options will differ by type of task, here the Configuration tab of an Import task is shown as an example:

  1. We are on the Configuration tab on the right-hand side pane in DataStar.
  2. The description of the type of task that was selected is shown here, in this case of the Import task in the Transform category.
  3. The name of the task can be entered here.
    1. Once the task name has been saved, it is also listed at the top of the configuration form.
  4. The Data Connection section needs to be configured.
    1. For each section within a task configuration, there is an indicator telling user the status of this section of the configuration. Here the green check mark indicates the Data Connection section of the task configuration has been completed. When this icon is orange, it means the configuration is not yet finished.
    2. Sections within a configuration can be expanded/collapsed by clicking on the down/up caret icon.
  5. Within the Data Connection configuration section, first the Source is specified:
    1. Select the connection that will function as the source for the import task from the drop-down list containing the data connections set up in the project. Cosmic Frog models and CSV File connections can be used as the source for an Import task.
    2. For data connections with multiple tables (such as a Cosmic Frog model), users can select the table to use as the source from the drop-down list, which also shows how many records each table contains. In our example here, we are using the Shipments connection, which is a CSV file, so the 1 table in this file is used, and users do not need to select anything from the drop-down list.
    3. If a new data connection that is not yet part of the project is to be used as the source, users can click on the plus icon to add a new Data Connection.
  6. Next, the Destination of the import task is configured:
    1. Select the connection that will function as the destination for the import task from the drop-down list. This list will contain the Postgres data connections (including the Project Sandbox and Cosmic Frog models) which are set up in the project. Oftentimes, the Project Sandbox will be the destination connection for Import tasks as the imported data will almost always still need to be cleansed, validated, and blended before reaching its final state.
    2. Enter the name of the new table to be created in the destination data connection.
    3. If a new data connection that is not yet part of the project is to be used as the destination, user can click on the plus icon to add a new Data Connection.

Please note that:

  • The table name is set to RawShipments in the configuration, and it will be imported to the Project Sandbox as a table named rawshipments, so the name is converted to all lowercase.
  • If there are spaces in the column names in the CSV file, these will be replaced by underscores when importing into the Project Sandbox. Special characters like parentheses in column names are removed. For example a column named Distance (MI) is imported as distance_mi.
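
The renaming behavior described above can be approximated with a small helper function. This is an illustrative guess at the rule based on the documented examples, not DataStar's exact implementation:

```python
import re

def sanitize_column_name(name: str) -> str:
    """Approximate DataStar's import renaming: lowercase the name,
    replace spaces with underscores, and drop special characters
    such as parentheses."""
    name = name.strip().lower()
    name = re.sub(r"\s+", "_", name)        # spaces -> underscores
    name = re.sub(r"[^a-z0-9_]", "", name)  # drop special characters
    return name

print(sanitize_column_name("Distance (MI)"))  # distance_mi
print(sanitize_column_name("RawShipments"))   # rawshipments
```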

Leapfrog Tab

Leapfrog in DataStar (aka D*AI) is an AI-powered feature that transforms natural language requests into executable DataStar tasks. Users can describe what they want to accomplish in plain language, and Leapfrog automatically generates the corresponding task or SQL query without requiring technical coding skills or manual inputs for task details. This capability enables both technical and non-technical users to efficiently manipulate data, build Cosmic Frog models, and extract insights through conversational interactions with DataStar.

Note that there are 2 appendices at the end of this documentation where 1) details around Leapfrog in DataStar's current features & limitations are covered and 2) Leapfrog's data usage and security policies are summarized.

  1. Leapfrog can be accessed by clicking on the “How can I help you” text bubble or the frog icon in the toolbar at the top of DataStar, or by clicking on the Leapfrog tab on the right-hand side pane.
  2. User can type a prompt into the “Write a message…” free type text box. Here user is asking to create unique customers from the destination stores that are present in the rawshipments table, which was imported into the Project Sandbox. Extra instructions to average the latitude and longitude if there are multiple records for the same destination store are given in order to calculate a latitude and longitude for each customer.
  3. Hit enter or click on the blue Send icon on the right to submit the prompt.
  4. There is a "Conversations" pane on the left when the Leapfrog tab is active. This pane can be expanded by clicking on the icon with the 2 greater than signs. Previous Leapfrog conversations will then be shown in the pane, so user can go back to these. This pane will be discussed in more detail further below.

Leapfrog’s response to this prompt is as follows:

  1. The prompt submitted by the user is listed at the top.
  2. Leapfrog first describes in natural language what it has done in response to the prompt.
  3. It is creating a Run SQL task named “Create customers table” as the response to the prompt.
  4. The Data Connection section lists that the target connection is the Project Sandbox.
  5. In the SQL Script section, the SQL query that will be executed if adding this task as a Run SQL task to a macro is shown.
    1. User can click on this expand icon to show the SQL Query in a bigger Code Editor window. The complete SQL Query reads:
DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT destination_store AS customer, AVG(destination_latitude) AS latitude, AVG(destination_longitude) AS longitude
FROM rawshipments
GROUP BY destination_store;
  6. Clicking on the “Add to Macro” button will add a Run SQL task named “Create customers table” with this configuration to the Macro Canvas.
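For readers who want to check what the generated query computes, here is the same aggregation sketched in plain Python (illustrative only; the field names follow the rawshipments columns used above):

```python
from collections import defaultdict

# Plain-Python sketch of the generated SQL: one customer per
# destination_store, with latitude/longitude averaged across all of that
# store's shipment records.
def unique_customers(shipments):
    groups = defaultdict(list)
    for row in shipments:
        groups[row["destination_store"]].append(row)
    return [
        {
            "customer": store,
            "latitude": sum(r["destination_latitude"] for r in rows) / len(rows),
            "longitude": sum(r["destination_longitude"] for r in rows) / len(rows),
        }
        for store, rows in groups.items()
    ]
```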

Finally, we will have a look at the Conversations pane:

  1. The icon with 2 greater than signs was clicked on to open the Conversations pane, which is now visible on the left. The icon has changed to 2 less than signs which can be used to collapse this pane again.
  2. Previous Leapfrog conversations are listed in the Conversations list. Clicking on a conversation will open it on the right hand-side. Users can review the previous prompts and responses, decide to add any Run SQL tasks Leapfrog generated to their macro if they were not added before, or continue the conversation where it was left off.
  3. When hovering over a conversation in the list, 2 icons become visible. These can be used to 1) rename the conversation (by default its name is the text of the first prompt in the conversation) and 2) to delete the conversation (user will get a confirmation message before the conversation is deleted).
  4. The "+ New Conversation" button can be used to start a new blank conversation.

Within a Leapfrog conversation, Leapfrog remembers the prompts and responses thus far. User can therefore build upon previous questions, for example by following up with a prompt along the lines of “Like that, but instead of using a cutoff date of August 10, 2025, use September 24, 2025”.

Running a Macro

Users can run a Macro by selecting it and then clicking on the green Run button at the right top of the DataStar application:

  1. The “Customers from Shipments” macro is open and is also selected in the Macros tab on the left-hand side pane (not shown).
  2. The green Run button is enabled and clicking this will immediately kick off the macro run. Its progress can be monitored in the Logs tab at the bottom of the macro canvas (see also next section) and in the Run Manager application on the Optilogic platform.

Please note that:

  • If a task is selected in the Macros tab on the left-hand side pane or is selected in the macro canvas by clicking on it, then clicking on the Run button will bring up the following message:
  • User can then choose to run the whole macro the task is part of, or just the task by itself.
  • Macros do not need to be complete to be run; it is good practice to run individual tasks and partially built macros along the way, rather than building out a whole macro without testing it.

Logs Tab

Next, we will cover the Logs tab at the bottom of the Macro Canvas where logs of macros that are running/have been run can be found:

When a macro has not yet been run, the Logs tab will contain a message with a Run button, which can also be used to kick off a macro run. When a macro is running or has been run, the log will look similar to the following:

  1. The pane at the bottom of the macro canvas is expanded and we are on the Logs tab.
  2. At the top of the log the name of the macro is listed. If user switches to a different macro in the Macros tab or by clicking on a tab at the top of the macro canvas, the Logs tab will display the logs of that macro.
  3. The Total Run Time indicates how long the macro ran for (if completed) or has been running for so far (if still processing).
  4. In the Run Selection drop-down, users can switch between looking at the logs of the current macro run and any previous runs of this particular macro.
  5. The run summary indicates, out of the total number of tasks that were attempted in the run (the "All" number), how many:
    1. Errored - did not run to completion.
    2. Are Blocked - if a task is dependent on preceding task(s), it is blocked until the preceding task(s) have completed successfully.
    3. Are Pending - awaiting to be run.
    4. Are Processing - are currently being executed.
    5. Have Completed - have finished running without any errors.
  6. The macro that was run (Customers from Shipments) has 2 tasks, Import Raw Shipments and Create Unique Customers. We see in these 2 log records that both completed successfully. The type of task, and when the task started and ended are listed too. Should error(s) have occurred, the last one recorded will be listed in the Last Error column.
  7. This grid and its columns can be customized by the user, see the Appendix for details.

The next screenshot shows the log of an earlier run of the same macro where the first task ended in an error:

  1. In the Run Selection drop-down, we have chosen to look at the logs of this macro when it was run on August 11, 2025 at 1:13PM.
  2. We also notice in the Run Selection drop-down that the icon to the left of the date & time of each run indicates the status of the run. For this particular one started at 11:04AM on September 3, 2025, the turning blue circle indicates that this run is still processing.
  3. For the run we are viewing the log for, the status bar indicates that 1 task errored.
  4. Looking at the records in the grid, we see that the first task has status errored and since the second task depends on the first one completing without problems, it was cancelled.
  5. For the first task the last error that was encountered during the execution of the task is listed in the Last Error column. Reading this may help a user pinpoint the problem, but if not, our dedicated support team can most likely help! Feel free to reach out to them on support@optilogic.com.
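The blocking and cancellation behavior described above can be pictured with a small sketch (assumed semantics for illustration, not DataStar's actual scheduler):

```python
# Minimal sketch of the run-status semantics described above: a task whose
# prerequisite did not complete successfully is cancelled rather than run.
# The run_macro/run_task names here are illustrative, not DataStar APIs.
def run_macro(tasks, deps, run_task):
    """tasks: task names in execution order; deps: task -> prerequisite tasks."""
    status = {}
    for t in tasks:
        if any(status.get(d) != "Completed" for d in deps.get(t, [])):
            status[t] = "Cancelled"  # blocked by a failed or cancelled prerequisite
            continue
        try:
            run_task(t)
            status[t] = "Completed"
        except Exception:
            status[t] = "Errored"
    return status
```

Running this with a failing first task reproduces the log shown above: the import task ends up Errored and the dependent task Cancelled.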

The progress of DataStar macro and task runs can also be monitored in the Run Manager application where runs can be cancelled if needed too:

  1. When logged into the Optilogic platform, click on the Run Manager icon in the list of available applications on the left-hand side to open it. Your Run Manager icon may be in a different location in the list, and if it is not visible at all, then click on the icon with 3 horizontal dots to show any applications that are not shown currently.
  2. The job shown in the second record of the Run Manager is that of the DataStar Macro run. Each Macro that is run will have a record for the overall macro and additional records for each task within the macro. This Macro Run job currently has status = Running. To cancel the run, right-click on the job and select Cancel Job from the menu that comes up.
  3. The job shown in the first record of the Run Manager is that of an individual Import task within the overall macro. It has status = cancelled as it had already been cancelled previously.

Please note that:

  • A log is recorded in the Logs tab while a macro is running; users can watch the real-time updates if the Logs tab is open.
  • No log is available for a macro that has not yet been run.

Data Connections Tab

In the Data Connections tab on the left-hand side pane the available data connections are listed:

  1. User has clicked on the Data Connections tab in the left-hand side pane of DataStar to make this the active tab.
  2. All Data Connections currently set up within DataStar are listed here. With the exception of the Project Sandbox, which is unique to each project, all other connections are accessible by all DataStar projects at the moment. Currently, there are 4 data connections available: the Project Sandbox, a CSV File connection named Historical Shipments, and 2 Cosmic Frog Models. Connections can be expanded to view their content (e.g. the tables/views contained in them) by clicking on the greater than sign to the left of the connection's name. See the next screenshot for what the connections list looks like when the connections are expanded.
  3. To quickly find specific connections and/or tables contained in them, user can type into the Search box. Only connections and tables with the search text in their names will then be shown in the list. Please note that for the search to be performed on tables within a data connection, the data connection needs to be expanded (see previous bullet). If the option to not show empty tables is enabled (next bullet), then the search will skip these empty tables and only return populated tables.
  4. Clicking on the filter icon will bring up 2 options for what is included when showing the contents of connections:
    1. Show Empty Tables - user can choose to show these in the tables list when the connection is expanded by leaving the checkbox checked (default) or, alternatively, uncheck this checkbox so that empty tables are hidden.
    2. Show Views - database connections can have views in them, which are named queries that run on 1 or multiple tables in the database. By default this checkbox is unchecked and views are not shown when a connection is expanded. However, users can choose to show them by checking this checkbox.

Next, we will have a look at what the connections list looks like when the connections have been expanded:

  1. The Project Sandbox connection, which is a Postgres database underneath, has been expanded:
    1. There are multiple schemas present in the Project Sandbox database; the one that contains the tables and will be shown when expanded is the Starburst schema.
    2. We see that there are 2 tables here which have been populated by running the Customers from Shipments macro: the rawshipments table is the result of the Import task ("Import Raw Shipments") in the macro; it has 42.66k records. The customers table is the result of running the Run SQL task ("Create Unique Customers") and this has resulted in 1.33k unique customers.
  2. The Historical Shipments connection is the CSV File data connection connected to the shipments.csv file which contains raw shipment data. Since it is a CSV File connection, it has 1 table in it, which has the same name as the csv file it is connected to (shipments).
  3. This Cosmic Frog model connection is named Cosmic Frog NA Model. Cosmic Frog models are also Postgres databases underneath with a specific schema that Optilogic's Cosmic Frog application uses for optimizations (including network and transportation), simulations (including inventory), and Greenfield runs.
    1. The schema used for the tables in a Cosmic Frog model is called anura_2_8 and this schema is expanded in the connection to view the tables.
    2. In this example, we have chosen not to show empty tables and we see the first 4 populated tables in the list.

Viewing a Connection's Table

The tables within a connection can be opened within DataStar. They are then displayed in the central part of DataStar where the Macro Canvas is showing when a macro is the active tab.

Please note: currently, a data preview of up to 10,000 records for a table is displayed for tables in DataStar. This means that any filtering or sorting done on tables larger than 10k records is done on this subset of 10k records. At the end of this section it is explained how datasets containing more than 10k records per table can be explored by using the SQL Editor application.

  1. A table is opened in the central part of DataStar by clicking on it in the connections list. Here user clicked on the rawshipments table to open it. The tabs across the top of the central part where the table is now displayed have the name of the table or macro that is open in that tab on them. Users can switch between tables and macros by clicking on the tabs. Currently, the rawshipments table and the customers table are both open, with the rawshipments table being shown since that is the active tab.
  2. At the moment, DataStar will show a preview of up to 10,000 records of any table. The total number of records in the table is also mentioned here. As mentioned above, this also means that any filtering or sorting is performed on this subset of up to 10k records.
  3. An additional menu named Table Functions is available in the toolbar when a table is open in DataStar's central part. Options from this menu are:
    1. Export to CSV - this will export the table to a csv file which will be accessible from user's Downloads area on the Optilogic platform (click on your username at the right top of the screen and select Downloads from the drop-down list). Note to check back in a few minutes if you do not see the download there immediately.
    2. Open in SQL Editor - for databases, this will open the database in the SQL Editor application on the Optilogic platform and show the table that was active in DataStar. A screenshot of a DataStar project sandbox database in SQL Editor and a link to a Help Center article on the SQL Editor application are included at the end of this section.
  4. This grid can be customized by the user (e.g. sort and change the order of columns), see the appendix on how to do this.
  5. Users can also filter the grid based on values in one or multiple columns. The next screenshot covers this in more detail.
  6. On the right-hand side, there are 2 panes available that will become visible when clicking on their names: 1) Columns: to configure which columns are shown in the grid and in which order, and 2) Filters: filters can also be configured from this fold out pane. Each of these are covered in a screenshot further below in this section. Once a pane has been opened, it can be closed again by clicking on its name on the right-hand side of the pane.

A table can be filtered based on values in one or multiple columns:

  1. A column that has been filtered can be recognized by the blue filter icon to the right of the column name. This filter icon is black when not filtering on this column. Clicking on the filter icon brings up a form where filters can be configured.
  2. Currently, the product name field is filtered for records where the product name contains the text "chair". This filter is case-insensitive, and all records that have "chair" anywhere in the product name (at the start, at the end, or somewhere in the middle) will be shown. Please note that:
    1. Once user hits the Enter key after typing into the Filter... text box the filter is applied.
    2. To remove a filter, user needs to delete the text from the Filter... text box.
  3. A filter can consist of multiple parts and whether only records that satisfy all filter parts are shown or records that satisfy at least one of the parts are shown depends on the selection of "AND" vs "OR" here. When using AND, only records that satisfy all filter parts will be shown. When using OR, records that satisfy at least one of the filter parts will be shown.
  4. Besides filtering records for their values containing certain text (see bullet 2 above), there are additional options available as shown in this drop-down list. After selecting the desired option, user can type in the Filter... text box (not visible in the above screenshot as it is covered by the filter type drop-down list). The drop-down list shown in the above screenshot is for columns of string/text data type. Different options are available for columns containing numerical data, as shown here for the Units column:
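The AND/OR filter semantics described above can be summarized in a small sketch (an illustrative assumption; the operator names mirror a few of the drop-down options):

```python
# Sketch of the filter semantics above: each filter part is a
# (filter type, criterion) pair tested against a cell value; AND requires
# all parts to match, OR requires at least one. Illustrative only.
def matches(value, parts, mode="AND"):
    ops = {
        "contains": lambda v, c: str(c).lower() in str(v).lower(),  # case-insensitive
        "equals": lambda v, c: str(v).lower() == str(c).lower(),
        "starts with": lambda v, c: str(v).lower().startswith(str(c).lower()),
    }
    results = [ops[op](value, crit) for op, crit in parts]
    return all(results) if mode == "AND" else any(results)
```

For example, a "contains chair" filter matches "Office Chair Deluxe" regardless of where (and in what case) the text appears.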

Columns can be re-ordered and hidden/shown as described in the Appendix; this can be done using the Columns fold-out pane too:

  1. Click on Columns on the right-hand side of the table to open the Columns pane.
  2. To find the column(s) of interest quickly, user can type into the Search... text box to filter the list of columns down to those containing the typed text in their name.
  3. These checkboxes are used to hide/show columns in the grid: uncheck a column's checkbox to hide it. Note that the checkbox at the top of the list can be used to hide/show all columns with one click.
  4. The order of the columns in the grid can be changed by clicking on the icon with 4x3 dots, then hold the mouse down and drag the column up or down. Let go of the mouse once the column is in the desired position.

Finally, filters can also be configured from a fold-out pane:

  1. Click on Filters on the right-hand side of the table to open the Filters pane.
  2. To find the column(s) you want to filter on quickly, you can type into the Search... text box to filter the list of columns down to those containing the typed text in their name.
  3. Click on the greater than icon to the left of the column name that you want to apply the filter to so that it expands and the filter configuration for the column becomes visible. Configure the filter as covered above by selecting the filter type from the drop-down and typing the filter criterion into the Filter... text box.
  4. A column that has a filter applied to it already can be recognized in the list: it has a filter icon to the right of its column name whereas unfiltered columns have no such icon displayed.

Users can explore the complete dataset of connections with tables larger than 10k records in other applications on the Optilogic platform, depending on the type of connection:

  • Lightning Editor: for CSV files
  • SQL Editor: for Postgres DB connections, which includes the Project Sandbox and Cosmic Frog models. See this SQL Editor Overview help center article on how to use the SQL Editor. For the Project Sandbox, please note that:
    • The name of the Project Sandbox database is the same as the project name
    • The tables that are created in the sandbox can be found under the “starburst” schema

Here is how to find the database and table(s) of interest on SQL Editor:

  1. When logged into the Optilogic platform, click on the SQL Editor icon in the list of available applications on the left-hand side to open it. Your SQL Editor icon may be in a different location in the list, and if it is not visible at all, then click on the icon with 3 horizontal dots to show any applications that are not shown currently.
  2. Either use the Search box at the top to find your database of interest or scroll through the list. DataStar project sandbox databases can be recognized by the DataStar logo to the left of the database name. The name of a DataStar project sandbox database is that of the DataStar project it belongs to, in our example "Import Historical Shipments".
  3. When expanding the database, the starburst schema which contains the tables contained in the data connection will by default be expanded too.
  4. We see the customers and rawshipments tables that were the result of running the Customers from Shipments macro. Clicking on a table will run a query to show the first 20 records of that table.
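As a sketch, the preview that runs when clicking a table boils down to a schema-qualified SELECT with a row limit; the exact query text below is an assumption, but it shows how sandbox tables are addressed under the starburst schema with lowercase names:

```python
# Hypothetical helper that builds the kind of preview query described above.
# The "starburst" schema and 20-record limit come from the text; the exact
# SQL that SQL Editor issues may differ.
def preview_query(table: str, schema: str = "starburst", limit: int = 20) -> str:
    return f"SELECT * FROM {schema}.{table} LIMIT {limit};"
```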

Helpful Resources

We hope you are as excited about starting to work with DataStar as we are! Please stay tuned for regular updates to both DataStar and all the accompanying documentation. As always, for any questions or feedback, feel free to contact our support team at support@optilogic.com.

Appendix - Customizing Grids

The grids used in DataStar can be customized and we will cover the options available through the screenshot below. This screenshot is of the list of CSV files in user's Optilogic account when creating a new CSV File connection. The same grid options are available on the grid in the Logs tab and when viewing tables that are part of any Data Connections in the central part of DataStar.

  1. The columns in the grid can be dragged to change the order of the columns, and they can also be resized by clicking on the vertical bar in between the columns (the mouse then changes to 2 arrows pointing away from each other), holding the mouse down and moving right or left. Double-clicking while hovering over the vertical bar (mouse has changed to 2 arrows pointing away from each other) will autosize the column to fit the longest value.
  2. The grid can be sorted by the values of a column by clicking on its column name; this will sort the column in ascending order. Clicking on the column name will change the sort to be in descending order and clicking a third time takes the sort off the column. Sorting by multiple columns is possible too: sort the first column as desired, then hold down the Ctrl and Shift keys while clicking on the name(s) of a second, third, etc. column to add them to the multi-sort. Numbers indicate the order of the sort. Here, the grid was first sorted by the Location column and then by File Name.
  3. Clicking on the icon with 3 vertical dots to the right of a column name will bring up a context menu with the following options:
    1. Sort Ascending / Sort Descending / Clear Sort: depending on whether the column is sorted and, if so, how, 2 of these 3 options will be listed for each column to quickly change or remove the sort on this column.
    2. Pin Column: columns can be put in a fixed position that will stay visible when scrolling. Options are to pin the column all the way to the left or all the way to the right of the grid.
    3. Autosize This column: change the width of the column to fit the longest value.
    4. Autosize All Columns: change the width of all columns in the grid to fit their longest values.
    5. Choose Columns: brings up the list of columns present in the grid with the options to 1) hide them by unchecking their checkboxes and / or 2) change the column order by dragging columns to different positions in the list.
    6. Reset Columns: unhides all columns if any are hidden and puts them in their original order.
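The multi-column sort described in step 2 above can be pictured with a tiny Python sketch over made-up file-grid rows, sorting first by Location and then by File Name:

```python
# Illustration of the multi-column sort in step 2: rows are compared by
# Location first, then by File Name. The sample rows are made up.
files = [
    {"file_name": "shipments.csv", "location": "My Files"},
    {"file_name": "costs.csv", "location": "My Files"},
    {"file_name": "demand.csv", "location": "Archive"},
]
files.sort(key=lambda f: (f["location"], f["file_name"]))
```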

Appendix - Leapfrog Features & Limitations

Current Version Features:

  • Text to SQL generation capabilities through the “Run SQL” task. Currently, the following types of SQL queries are supported:
    • Insert
    • Update
    • Delete
    • Create Table
    • Alter Table
    • Union
  • Supported data connections: Project Sandbox (Starburst schema) relevant to the DataStar project.
  • Output formats - each prompt response typically contains:
    • Description on what Leapfrog creates in the Documentation part
    • Run SQL task
    • Options to Run Task (not yet functional) or Add to Macro
  • Multi-turn conversation: Leapfrog has a ‘memory’ within each conversation. This means user can reference a previous prompt or response in a subsequent request.
  • Conversation history is kept, and user can go back to previous conversations to, for example, add any Run SQL tasks to a macro that were not added yet, or continue a conversation from where it was left off.
  • Multi-language support: users can submit their prompt in languages other than English, and the Documentation part of the response will be in the same language.
  • Leapfrog knows the Anura schema, which Cosmic Frog models use, and thereby facilitates the creation of tables with that same schema. These can next be exported to Cosmic Frog models (Export task coming soon!) and used in Cosmic Frog without further changes or updates.

Current Limitations

  • All prompts will result in a SQL task in this initial Early Adopter release. In the future, Leapfrog will try to suggest a no-code task unless you include keywords like 'write', 'generate', 'create', or 'give me' combined with 'SQL (task)' in your prompt.
  • Each task created from Leapfrog can only use 1 connection at a time, and this connection is the Project Sandbox (Starburst). Moving data from one connection to another is not yet supported.
  • Leapfrog in DataStar cannot answer questions about its own capabilities yet.

Appendix - Leapfrog Data Usage & Security

Training Data

Leapfrog's brainpower comes from:

  • Optilogic's Anura schema
  • Hundreds of handcrafted SQL examples from real supply chain experts

All training processes are owned and managed by Optilogic — no outside data is used.

Using Leapfrog

When you ask Leapfrog a question:

  • It securely accesses your data through an API.
  • Your data stays yours — no external sharing or external training.

Conversation History

Your conversations (prompts, answers, feedback) are stored securely at the user level.

  • Only you and authorized Optilogic personnel can view your history.
  • Other users cannot access your data.

Privacy and Ownership

  • You retain full ownership of your model data. Only authorized Optilogic personnel can access it.
  • Optilogic uses industry-standard security protocols to keep everything safe and sound.

DataStar Overview

Before diving into more details in later sections, this section will describe the main building blocks of DataStar, which include Data Connections, Projects, Macros, and Tasks.

As DataStar is currently in the Early Adopter phase, this document will be updated regularly as more features become available. In this section, references to future capabilities which are not yet released are included in order to paint the larger picture of how DataStar will work. In the text it is made clear which parts are and which are not yet available in the first Early Adopter release.

Data Connections

Since DataStar is all about working with data, Data Connections are an important part of DataStar. These enable users to quickly connect to and pull in data from a range of data sources. Data Connections in DataStar:

  • Are global to the DataStar application – meaning each project within DataStar can use any of the data sources that have been set up as Data Connections.
  • Can also be set up from within a DataStar project – they then become available for use in other DataStar projects too.
  • Can be of the following types (the last 6 indicated with an * are not yet available in the Early Adopter program):
    • Postgres – an open-source relational database management system that supports both SQL and JSON querying
    • CSV Files – files containing data in the comma separated values format, which can be created by and opened in Excel
    • Cosmic Frog Models – a Cosmic Frog model which is a Postgres database using a specific data schema called Anura. Often the projects in DataStar will populate Cosmic Frog model input tables to build complete models that are ready to be run by one of the Cosmic Frog engines and/or read in Cosmic Frog output tables for output analysis
    • Excel* – spreadsheet editor developed by Microsoft for Windows
    • MySQL* – an open-source relational database management system that supports SQL querying
    • SQLite* – an open-source relational database engine used as a library in applications
    • OneDrive* – cloud storage server provided by Microsoft
    • ODBC Connection* – a standard way for applications to connect to various databases. This means that if your data source is not one of the types listed here, you may still be able to connect to it if the target database has a specific ODBC driver available
    • Snowflake* - a cloud-based data platform that provides a data warehouse as a service (DWaaS)

Projects, Macros, and Tasks

Projects are the main container of work within DataStar. Typically, a Project will aim to achieve a certain goal by performing all or a subset of the following: importing specific data; cleansing, transforming & blending it; and finally publishing the results to another file/database. The scope of DataStar Projects can vary greatly; consider for example the following 2 examples:

  • Cleanse and filter a specific set of historical supply chain data.
  • Build a Cosmic Frog model from scratch using the raw data from the data sources available in DataStar’s Data Connections, then run the model, analyze its outputs, and finally generate reports at the desired level of aggregation.

Projects consist of one or multiple macros, which in turn consist of one or multiple tasks. Tasks are the individual actions or steps which can be chained together within a macro to accomplish a specific goal. In the future, multiple macros can also be chained together in another macro in order to run a larger process. Tasks are split into the following 3 categories in DataStar:

  • Transform – using these tasks user can convert the data from its raw state in the Data Connections to the clean and predefined format they desire. These tasks will include those that can import, export, select, group, delete, update, pivot and unpivot data. For the Early Adopter program, the Import transform task is initially available.
  • Execute & Automate – these tasks aim to make users as productive as possible by allowing them to run Cosmic Frog models, SQL or Python code, and other Macros as part of a Macro. Notifications can also be sent to alert users that a certain Macro has completed. Just the Run SQL task is available in the first version for the Early Adopter program.
  • AI Agents – in future updates of DataStar, these tasks will be able to perform common tasks using artificial intelligence. Think of automatically comparing scenario outputs or filling out missing data in input tables.

The next screenshot shows an example Macro called Shipments which consists of 7 individual tasks that are chained together to create transportation policies for a Cosmic Frog model from imported Shipments and Costs data. As a last step, it also runs the model with the updated transportation policies:

Note that not all tasks to build a macro like this are yet available in the current Early Adopter version of DataStar.

Project Sandbox

Every project by default contains a Data Connection named Project Sandbox. This data connection is not global to all DataStar projects; it is specific to the project it is part of. The Project Sandbox is a Postgres database into which users generally import the raw data from the other data connections, perform transformations, and save intermediate states of data, before publishing the results out to a Cosmic Frog model (which is a separate data connection from the Project Sandbox). It is also possible that some of the data in the Project Sandbox is the final result/deliverable of the DataStar Project, or that the results are published into a different type of file or system that is set up as a data connection, rather than into a Cosmic Frog model.

How Data Connections, Projects, and Macros Relate to Each Other

The next diagram shows how Data Connections, Projects, and Macros relate to each other in DataStar:

  1. In this example, there are 7 Data Connections configured in DataStar, see the rectangle with green background on the left:
    1. A OneDrive connection called Historical Data (OneDrive connections are not yet available in the current Early Adopter DataStar version)
    2. A Snowflake connection called Enterprise Data (Snowflake connections are not yet available in the current Early Adopter DataStar version)
    3. A Postgres connection called Location Data
    4. A CSV connection called Cost Data
    5. A CSV connection called Capacity Data
    6. A Cosmic Frog connection called Neo NA Model
    7. A Cosmic Frog connection called Global Model
  2. Note that the 2 Cosmic Frog connections displayed here on the right-hand side are the same 2 as shown in the list on the left, they are just repeated in the diagram to facilitate explaining the flow of data.
  3. There are 2 projects set up in DataStar, see the 2 rectangles with blue background in the middle:
    1. Project 1 creates Policies tables for the Cosmic Frog model named Neo NA Model, a network optimization model covering the North America geography.
    2. Project 2 builds, runs, and analyzes a complete Cosmic Frog model named Global Model from raw data.
  4. Looking at Project 1, we see that:
    1. It uses 3 of the 7 Data Connections available (blue arrows):
      1. Two to pull data in from: the Historical Data, and Cost Data connections.
      2. One to publish data into: the Neo NA Model.
    2. It has its own Project Sandbox as an additional Data Connection which is specific to this project only.
    3. It contains 3 macros: Shipments, Production, and Inventory. The Shipments macro can look similar to the example one seen in the previous screenshot.
    4. The 3 macros pull data from the Historical Data, Cost Data, and Project Sandbox connections.
    5. The 3 macros publish data into the Project Sandbox and the Neo NA model connections. The completed Transportation Policies, Production Policies, and Inventory Policies tables are published into the Cosmic Frog model.
  5. Similarly, looking at Project 2, we follow the yellow arrows to understand which Data Connections are used to pull data from and publish data into. Note that the Global Model connection is used to publish results into by the “Publish to Model” macro which populates the model’s input tables and it is also used as a connection to pull data from for the “Output Analysis” macro after the model has run to completion.

Early Adopter Development Note

For the remainder of this document, only current Early Adopter DataStar functionality is shown in the screenshots (with a few exceptions, which will be noted in the text). The text mostly just covers current functionality and will at times reference features which will be included in future DataStar versions. Within DataStar, users may notice buttons, options in drop-down and right-click menus that have been disabled (greyed out or cannot be clicked on), since new functionality is being worked on continuously. These will be enabled over time and other new features will also gradually be added.

Creating Projects & Data Connections

On the start page of DataStar, users are shown the existing projects and data connections. These can be opened or deleted here, and users can also create new projects and data connections on this start page.

The next screenshot shows the existing projects in card format:

  1. When logged into the Optilogic platform, click on the DataStar icon in the list of available applications on the left-hand side to open DataStar. Your DataStar icon may be in a different location in the list, and if it is not visible at all, then click on the icon with 3 horizontal dots to show any applications that are not shown currently.
  2. We are on the Projects tab of the start page in the DataStar application.
  3. The projects are shown in card format (the left icon); the other option is to show them as a list (the right icon).
  4. When hovering over a project, the options to edit the project (rename it and/or update its description) and to delete the project become visible. When clicking on the delete project icon, a message asking user to confirm they want to delete the project comes up before actually deleting it.
  5. Users can quickly search the list of projects by typing in the Search text box; the list will be filtered to show only projects whose names contain the typed text.

New projects can be created by clicking on the Create Project button in the toolbar at the top of the DataStar application:

  1. User clicked on the Create Project button which opened the Create Project form.
  2. Here, a Project Name can be entered.
  3. Optionally, user can write a Project Description.
  4. Under Project Type, user can currently just create a new Empty Project.
  5. Click on the Edit button to change the project’s appearance by choosing an icon and color.
  6. Click on the Add Project button to create the project.
  7. Note that on the right-hand side, Help for the currently open DataStar form is shown.

The next screenshot shows the Data Connections that have already been set up in DataStar in list view:

  1. We are on the Data Connections tab of the start page in the DataStar application.
  2. The Data Connections are shown in list format (right icon); the other option is to show them in card format (left icon) similar to the screenshot above of the Projects in card format.
  3. For each Data Connection we see the following details in the list: Name, Connection Type, Description, Created At, Owner, Last Edited, and Actions. Clicking on a column header sorts the table by that column in ascending order, clicking again sorts in descending order, and clicking a third time removes the sort. Holding down both the Shift and Ctrl keys while clicking on multiple column headers will sort the table by those columns.
    1. Note that when hovering over the Actions field in a data connection row, icons to rename and delete the connection become visible. When users click on the delete icon, a message asking the user to confirm they want to delete the data connection comes up before actually deleting it.
  4. Users can quickly search the list of data connections by typing in the Search text box; the list will be filtered to show only connections whose names contain the typed text.

New data connections can be created by clicking on the Create Data Connection button in the toolbar at the top of the DataStar application:

  1. The Create Data Connection form has been opened by clicking on the Create Data Connection button.
  2. First, a Data Connection Name needs to be entered.
  3. Optionally, user can write a Connection Description.
  4. The type of connection can be chosen from the Connection Type drop-down list. See the “Data Connections” section further above for a full list of connection types and a short description of each.

The remainder of the Create Data Connection form will change depending on the type of connection that was chosen as different types of connections require different inputs (e.g. host, port, server, schema, etc.). In our example, the user chooses CSV Files as the connection type:

  1. The Connection Type is now showing CSV Files per the selection user made.
  2. There are 2 options to select the CSV source file:
    1. The CSV file to be used for the Data Connection can be dragged and dropped onto this “Drag and drop” area from user’s computer. It will then be uploaded to the user’s /MyFiles/DataStar folder on the Optilogic platform. In case a file of the same name already exists in that location, it will be overwritten.
    2. User can browse the list of CSV files that exist in their Optilogic account already (not limited to files under /MyFiles/DataStar, will show all CSV files in their account) to select one as the source for the data connection. Note that:
      1. We can quickly find files of interest by typing in the Search box at the top of the list; the list will be filtered to show only files whose names contain the typed text.
      2. In case not all 3 columns shown in the screenshot are visible, users can scroll right to also see the file's location in their Optilogic workspace.
      3. Options to customize the grid to users' needs (e.g. sorting, changing the order of columns, etc.) are explained in the Appendix.
  3. After selecting the CSV file to be used for the Data Connection, users can click on the Add Connection button to create the new data connection.

In our walk-through here, the user drags and drops a Shipments.csv file from their local computer on top of the Drag and drop area:

  1. User dragged and dropped their local Shipments.csv file in the “Drag and drop” area.
  2. Once the upload of the file is finished, a message in green font indicating the upload completed successfully is shown.
  3. The Shipments.csv file is now listed in the list of CSV files the user has available in their Optilogic account. As expected, the location of this file is /MyFiles/DataStar. Click on the file in the list to select it.
  4. User can then click on the Add Data Connection button to create the connection.

Inside a DataStar Project

Now let us look at a project when it is open in DataStar. We will first get a lay of the land with a high-level overview screenshot and then go into more detail for the different parts of the DataStar user interface:

  1. At the top of the DataStar application, users will find a toolbar:
    1. Clicking on the icon all the way to the left will take user back to DataStar’s start page where the lists of existing projects and data connections are shown, see also the previous section “Creating Projects & Data Connections”.
    2. The left part of the toolbar contains from left to right:
      1. Create Macro button: click on this button to create a new macro.
      2. Data Connections drop-down menu: options in the menu are to create a new data connection and, in future, to upload data.
      3. Manage Variables button: in future, this button will be enabled so users can pass in values that can be used/updated in their macros.
    3. The right part of the toolbar gives users quick options to access Leapfrog AI and to run macros.
  2. In the pane on the left-hand side of the application, either the list of Macros that the project contains (left tab) or the list of available Data Connections (right tab) is shown. In this screenshot, the Macros tab is the active tab.
    1. Macros can be expanded/collapsed; when expanded you see a list of all the tasks/macros that make up the macro. This will be shown in more detail below.
    2. Likewise, data connections can also be expanded/collapsed; when expanded you see the available schemas for database connections and (for all connection types) the tables contained in the data connection.
  3. In the pane on the right-hand side of the application, there are 3 tabs, from left to right:
    1. Tasks – here tasks from the Transform, Execute & Automate, and AI Agents categories can be chosen and dragged and dropped onto the Macro Canvas (the central part of the DataStar application) to add them to the currently active macro. Currently, Import (Transform category) and Run SQL (Execute & Automate category) tasks are available, more tasks will be gradually added.
    2. Configuration – the specific configuration parameters for the currently selected task can be set or updated here.
    3. Leapfrog – start or continue a conversation with Leapfrog here. Use natural language prompts, and Leapfrog will configure tasks for you!
  4. The central part of DataStar is called the Macro Canvas. Tasks can be dragged and dropped onto here and then connected to each other to build out a macro that will accomplish a specific data process. The macro canvas becomes active when user clicks on a macro or one of its tasks in the Macros tab on the left. The macro name is also listed in the tab at the top of the canvas.
  5. Tables present in any of the Data Connections can also be shown in the central part of DataStar by clicking on them in the Data Connections tab. This shows as an additional tab across the top of the macro canvas. Multiple macros and tables can be opened here at the same time, and users can switch between them by clicking on the tab of the macro/table they wish to show.
  6. At the bottom of the Macro Canvas, 2 tabs are showing:
    1. Logs – here it is tracked which task was run when and if it completed successfully.
    2. Task Results – this will show the resulting table of the currently selected task; this functionality is not yet included in the Early Adopter release.
  7. The 3 panes on the left-hand side, right-hand side, and to the bottom of the Macro Canvas can all be collapsed and expanded as desired. This can be done by clicking on the icons with the 2 greater than/less than signs, or 2 arrowheads pointing up/down.

Macros Tab

Next, we will dive a bit deeper into a macro:

  1. The macro named “Customers from Shipments” is selected on the Macros tab on the left-hand side panel of DataStar. Clicking on a macro in the Macros tab will also open it in the macro canvas.
  2. The macro has been expanded, so we see the list of tasks that are part of this macro. Users will note that:
    1. By default, each macro has a task named Start, which has its own specific icon and blue color. This task cannot be removed or renamed and the first actual task of the macro will be connected to it.
    2. Tasks from the Transform category have light blue icons associated with them, and those from the Execute & Automate category are green. The icon itself also indicates the type of task it is. For example, the “Import Raw Shipments” task is an Import task from the Transform category, and the “Create Unique Customers” task is a Run SQL task from the Execute & Automate category.
    3. Right-clicking on a Macro or a Task will bring up a context menu which can be used to Rename or Delete the Macro or Task.
  3. Use the Search text box to quickly find a macro/task whose name contains the typed text.
  4. This button can be used to expand or collapse all macros with one click.
  5. Click on the Create Macro button in the toolbar to add a new Macro to the project.

Macro Canvas

The Macro Canvas for the Customers from Shipments macro is shown in the following screenshot (note that the Export task shown is not yet available in the Early Adopter release):

  1. The tab tells us which macro we are looking at. Note that multiple macros can be opened here in multiple tabs and users can easily switch between them by clicking on the tab of the desired macro.
  2. The canvas currently shows 3 of the tasks that are part of the Customers from Shipments macro. The bottom part of a task contains the name and the top colored part of a task shows what type of task it is. For example:
    1. The task at the top connected to Start is an Import task from the light blue Transform category; its name is “Import Raw Shipments”.
    2. The task at the bottom left is a Run SQL task from the green Execute & Automate category; its name is “Create Unique Customers”.
  3. Tasks can be dragged and dropped onto the canvas from the Tasks list in the right-hand side pane. Once on the canvas, users can connect tasks by clicking in the middle of the right edge of the first task, holding the mouse down, and then clicking in the middle of the left edge of the next task. Please note that:
    1. DataStar helps users by showing a bigger circle when hovering over the middle of a left or right edge of a task.
    2. Tasks can be connected to multiple other tasks. If there are for example 2 tasks connected to a third task that succeeds the first 2, then this third task will not execute until both preceding tasks have completed.
    3. To delete a line that connects 2 tasks: click on the line (it will then become a dotted orange line), and then hit the Delete or Backspace key. Alternatively, right-click on the line and select Delete from the context menu that comes up.
  4. In the left bottom corner of the canvas users have access to the following controls, from top to bottom:
    1. Zoom in: clicking on this plus icon will increase the size of the tasks on the canvas, less of the total macro will be visible.
    2. Zoom out: clicking on this minus icon will decrease the size of the tasks on the canvas, more of the total macro will be visible.
    3. Fit view: clicking on the icon with 4 square corners will set the position and zoom-level of the canvas such that all tasks/macros that are part of the macro will be shown on the canvas, using up as much of the canvas space as possible.
    4. Toggle interactivity: not currently used.
  5. The grey rectangle at the right bottom of the canvas shows a small diagram of where all the tasks that are part of the macro are positioned. The smaller white rectangle within this grey rectangle indicates which part of the entire macro the canvas is showing currently. This is helpful when you have a macro with many tasks and you want to pan through it while it is zoomed in.

In addition to the above, please note the following regarding the Macro Canvas:

  1. Clicking on a task in the canvas does several things:
    1. Selects the task (i.e. highlights it) in the macro(s) it is part of in the Macros tab on the left-hand side pane.
    2. Opens the Configuration of the task in the right-hand side pane.
    3. In a future update, it will also show the results of the most recent time the task was run in the Task Results tab in the pane at the bottom of the Macro Canvas.
  2. Hovering over a task will make Run Task and Delete icons visible:
    1. When clicking on the left Run Task icon, the task is run by itself immediately. Its progress can be monitored in the Logs tab at the bottom of the macro canvas and in the Run Manager application on the Optilogic platform.
    2. After clicking on the Delete Task icon on the right, a confirmation message to ensure the user wants to delete the task will come up before the task is removed.
  3. Users can position the canvas as they desire by clicking on it, holding the mouse down, and then moving the mouse to drag the canvas in any direction.
  4. Users can also zoom in/out on the canvas by using the mouse or 2 fingers on a trackpad (move them closer together to zoom out and further apart to zoom in).
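
The execution rule mentioned earlier (a task with multiple incoming connections waits until all of its predecessors have completed) can be sketched in a few lines of Python. This is an illustrative sketch of the scheduling behavior, not DataStar's actual engine, and the task names are hypothetical:

```python
from collections import defaultdict

def run_macro(tasks, edges, run):
    """Sketch of dependency-ordered execution: a task only runs once
    every task connected into it has completed (an assumption about
    the behavior described above, not DataStar's implementation)."""
    preds = defaultdict(set)
    succs = defaultdict(set)
    for a, b in edges:
        preds[b].add(a)
        succs[a].add(b)
    done, order = set(), []
    # tasks with no predecessors are ready immediately (like those wired to Start)
    ready = [t for t in tasks if not preds[t]]
    while ready:
        t = ready.pop(0)
        run(t)
        done.add(t)
        order.append(t)
        for s in succs[t]:
            # a successor becomes ready only when ALL its predecessors are done
            if preds[s] <= done and s not in done and s not in ready:
                ready.append(s)
    return order

# Two import tasks both feed a join task; the join waits for both.
order = run_macro(
    ["import_shipments", "import_costs", "join"],
    [("import_shipments", "join"), ("import_costs", "join")],
    run=lambda t: None,
)
print(order)  # the join task runs last
```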

Tasks Tab

We will move on to covering the 3 tabs on the right-hand side pane, starting with the Tasks tab:

  1. We are on the Tasks tab on the right-hand side pane in DataStar.
  2. As previously discussed, there are initially 2 task categories:
    1. Transform – these tasks perform commonly used data actions and currently include just the Import task (Export coming soon!).
    2. Execute & Automate – these tasks aim to make users more productive by incorporating automation and currently only includes the Run SQL task.

Users can click on a task in the tasks list and then drag and drop it onto the macro canvas to incorporate it into a macro.

Configuration Tab of a Task

When adding a new task, it needs to be configured, which can be done on the Configuration tab. When a task is newly dropped onto the Macro Canvas its Configuration tab is automatically opened on the right-hand side pane. To make the configuration tab of an already existing task active, click on the task in the Macros tab on the left-hand side pane or click on the task in the Macro Canvas. The configuration options will differ by type of task, here the Configuration tab of an Import task is shown as an example:

  1. We are on the Configuration tab on the right-hand side pane in DataStar.
  2. The description of the type of task that was selected is shown here, in this case of the Import task in the Transform category.
  3. The name of the task can be entered here.
    1. Once the task name has been saved, it is also listed at the top of the configuration form.
  4. The Data Connection section needs to be configured.
    1. For each section within a task configuration, there is an indicator telling user the status of this section of the configuration. Here the green check mark indicates the Data Connection section of the task configuration has been completed. When this icon is orange, it means the configuration is not yet finished.
    2. Sections within a configuration can be expanded/collapsed by clicking on the down/up caret icon.
  5. Within the Data Connection configuration section, first the Source is specified:
    1. Select the connection that will function as the source for the import task from the drop-down list containing the data connections set up in the project. Cosmic Frog models and CSV File connections can be used as the source for an Import task.
    2. For data connections with multiple tables (such as a Cosmic Frog model), users can select the table to use as the source from the drop-down list, which also shows how many records each table contains. In our example here, we are using the Shipments connection, which is a CSV file, so the 1 table in this file is used, and users do not need to select anything from the drop-down list.
    3. If a new data connection that is not yet part of the project is to be used as the source, users can click on the plus icon to add a new Data Connection.
  6. Next, the Destination of the import task is configured:
    1. Select the connection that will function as the destination for the import task from the drop-down list. This list will contain the Postgres data connections (including the Project Sandbox and Cosmic Frog models) which are set up in the project. Oftentimes, the Project Sandbox will be the destination connection for Import tasks as the imported data will almost always still need to be cleansed, validated, and blended before reaching its final state.
    2. Enter the name of the new table to be created in the destination data connection.
    3. If a new data connection that is not yet part of the project is to be used as the destination, user can click on the plus icon to add a new Data Connection.

Please note that:

  • The table name is set to RawShipments in the configuration, and it will be imported to the Project Sandbox as a table named rawshipments, so the name is converted to all lowercase.
  • If there are spaces in the column names in the CSV file, these will be replaced by underscores when importing into the Project Sandbox. Special characters like parentheses in column names are removed. For example, a column named Distance (MI) is imported as distance_mi.
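
Based on the examples above, the renaming behavior can be approximated with a small Python helper. The exact rules DataStar applies are not documented here, so treat this as a sketch rather than the authoritative implementation:

```python
import re

def normalize_name(name: str) -> str:
    """Approximate DataStar's import-time name normalization (assumed
    from the examples above; actual rules may differ): lowercase,
    spaces replaced by underscores, other special characters removed."""
    name = name.strip().lower().replace(" ", "_")
    # drop anything that is not a letter, digit, or underscore
    return re.sub(r"[^a-z0-9_]", "", name)

print(normalize_name("RawShipments"))    # rawshipments
print(normalize_name("Distance (MI)"))   # distance_mi
```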

Leapfrog Tab

Leapfrog in DataStar (aka D*AI) is an AI-powered feature that transforms natural language requests into executable DataStar tasks. Users can describe what they want to accomplish in plain language, and Leapfrog automatically generates the corresponding task or SQL query without requiring technical coding skills or manual inputs for task details. This capability enables both technical and non-technical users to efficiently manipulate data, build Cosmic Frog models, and extract insights through conversational interactions with DataStar.

Note that there are 2 appendices at the end of this documentation where 1) details around Leapfrog in DataStar's current features & limitations are covered and 2) Leapfrog's data usage and security policies are summarized.

  1. Leapfrog can be accessed by clicking on the “How can I help you” text bubble or the frog icon in the toolbar at the top of DataStar, or by clicking on the Leapfrog tab on the right-hand side pane.
  2. User can type a prompt into the “Write a message…” free type text box. Here user is asking to create unique customers from the destination stores that are present in the rawshipments table, which was imported into the Project Sandbox. Extra instructions to average the latitude and longitude if there are multiple records for the same destination store are given in order to calculate a latitude and longitude for each customer.
  3. Hit enter or click on the blue Send icon on the right to submit the prompt.
  4. There is a "Conversations" pane on the left when the Leapfrog tab is active. This pane can be expanded by clicking on the icon with the 2 greater than signs. Previous Leapfrog conversations will then be shown in the pane, so user can go back to these. This pane will be discussed in more detail further below.

Leapfrog’s response to this prompt is as follows:

  1. The prompt submitted by the user is listed at the top.
  2. Leapfrog first describes in natural language what it has done in response to the prompt.
  3. It is creating a Run SQL task named “Create customers table” as the response to the prompt.
  4. The Data Connection section lists that the target connection is the Project Sandbox.
  5. In the SQL Script section, the SQL query that will be executed if adding this task as a Run SQL task to a macro is shown.
    1. User can click on this expand icon to show the SQL Query in a bigger Code Editor window. The complete SQL Query reads:
DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS 
SELECT destination_store AS customer, AVG(destination_latitude) AS latitude, AVG(destination_longitude) AS longitude FROM rawshipments 
GROUP BY destination_store
    2. Clicking on the “Add to Macro” button will add a Run SQL task named “Create customers table” with this configuration to the Macro Canvas.
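
To see what a query like this does, here is a sketch that runs the same aggregation against a small, made-up rawshipments table using Python's built-in sqlite3 module. DataStar's Project Sandbox is a Postgres database, but this SQL is standard enough to behave the same here; the sample data is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical sample rows standing in for the imported rawshipments table;
# column names follow the lowercase/underscore convention noted earlier.
conn.execute(
    "CREATE TABLE rawshipments ("
    "destination_store TEXT, destination_latitude REAL, destination_longitude REAL)"
)
conn.executemany("INSERT INTO rawshipments VALUES (?, ?, ?)", [
    ("Store A", 40.0, -83.0),
    ("Store A", 42.0, -85.0),   # duplicate store: coordinates get averaged
    ("Store B", 35.5, -97.5),
])
# The same shape of query Leapfrog generated: one row per destination store,
# with averaged latitude and longitude.
conn.executescript("""
DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT destination_store AS customer,
       AVG(destination_latitude) AS latitude,
       AVG(destination_longitude) AS longitude
FROM rawshipments
GROUP BY destination_store;
""")
rows = list(conn.execute("SELECT * FROM customers ORDER BY customer"))
print(rows)  # [('Store A', 41.0, -84.0), ('Store B', 35.5, -97.5)]
```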

Finally, we will have a look at the Conversations pane:

  1. The icon with 2 greater than signs was clicked on to open the Conversations pane, which is now visible on the left. The icon has changed to 2 less than signs which can be used to collapse this pane again.
  2. Previous Leapfrog conversations are listed in the Conversations list. Clicking on a conversation will open it on the right hand-side. Users can review the previous prompts and responses, decide to add any Run SQL tasks Leapfrog generated to their macro if they were not added before, or continue the conversation where it was left off.
  3. When hovering over a conversation in the list, 2 icons become visible. These can be used to 1) rename the conversation (by default its name is the text of the first prompt in the conversation) and 2) to delete the conversation (user will get a confirmation message before the conversation is deleted).
  4. The "+ New Conversation" button can be used to start a new blank conversation.

Within a Leapfrog conversation, Leapfrog remembers the prompts and responses thus far. Users can therefore build upon previous questions, for example by following up with a prompt along the lines of “Like that, but instead of using a cutoff date of August 10, 2025, use September 24, 2025”.

Additional helpful Leapfrog in DataStar links:

Running a Macro

Users can run a Macro by selecting it and then clicking on the green Run button at the right top of the DataStar application:

  1. The “Customers from Shipments” macro is open and is also selected in the Macros tab on the left-hand side pane (not shown).
  2. The green Run button is enabled and clicking this will immediately kick off the macro run. Its progress can be monitored in the Logs tab at the bottom of the macro canvas (see also next section) and in the Run Manager application on the Optilogic platform.

Please note that:

  • If a task is selected in the Macros tab on the left-hand side pane or is selected in the macro canvas by clicking on it, then clicking on the Run button will bring up a message asking whether to run the whole macro the task is part of or just the task by itself.
  • Macros do not need to be complete to be run; it is good practice to run individual tasks and partially built macros as you go, rather than building out an entire macro without testing it along the way.

Logs Tab

Next, we will cover the Logs tab at the bottom of the Macro Canvas where logs of macros that are running/have been run can be found:

When a macro has not yet been run, the Logs tab will contain a message with a Run button, which can also be used to kick off a macro run. When a macro is running or has been run, the log will look similar to the following:

  1. The pane at the bottom of the macro canvas is expanded and we are on the Logs tab.
  2. At the top of the log the name of the macro is listed. If user switches to a different macro in the macros tab or by clicking on a tab at the top of the macro canvas, the Logs tab will display the logs of that macro.
  3. The Total Run Time indicates how long the macro ran for (if completed) or has been running for so far (if still processing).
  4. In the Run Selection drop-down, users can switch between looking at the logs of the current macro run and any previous runs of this particular macro.
  5. The run summary indicates, out of the total number of tasks in the run (the "All" number), how many:
    1. Errored - did not run to completion.
    2. Are Blocked - if a task is dependent on preceding task(s), it is blocked until the preceding task(s) have completed successfully.
    3. Are Pending - awaiting to be run.
    4. Are Processing - are currently being executed.
    5. Have Completed - have finished running without any errors.
  6. The macro that was run (Customers from Shipments) has 2 tasks, Import Raw Shipments and Create Unique Customers. We see in these 2 log records that both completed successfully. The type of task, and when the task started and ended are listed too. Should error(s) have occurred, the last one recorded will be listed in the Last Error column.
  7. This grid and its columns can be customized by the user, see the Appendix for details.

The next screenshot shows the log of an earlier run of the same macro where the first task ended in an error:

  1. In the Run Selection drop-down, we have chosen to look at the logs of this macro when it was run on August 11, 2025 at 1:13PM.
  2. We also notice in the Run Selection drop-down that the icon to the left of the date & time of each run indicates the status of the run. For this particular one started at 11:04AM on September 3, 2025, the spinning blue circle indicates that the run is still processing.
  3. For the run we are viewing the log for, the status bar indicates that 1 task errored.
  4. Looking at the records in the grid, we see that the first task has status errored and since the second task depends on the first one completing without problems, it was cancelled.
  5. For the first task the last error that was encountered during the execution of the task is listed in the Last Error column. Reading this may help a user pinpoint the problem, but if not, our dedicated support team can most likely help! Feel free to reach out to them on support@optilogic.com.

The progress of DataStar macro and task runs can also be monitored in the Run Manager application where runs can be cancelled if needed too:

  1. When logged into the Optilogic platform, click on the Run Manager icon in the list of available applications on the left-hand side to open it. Your Run Manager icon may be in a different location in the list, and if it is not visible at all, then click on the icon with 3 horizontal dots to show any applications that are not shown currently.
  2. The job shown in the second record of the Run Manager is that of the DataStar Macro run. Each macro that is run will have a record for the overall macro and additional records for each task within it. This Macro Run job currently has status = Running. To cancel the run, right-click on the job and select Cancel Job from the menu that appears.
  3. The job shown in the first record of the Run Manager is that of an individual Import task within the overall macro. It has status = Cancelled, as it was cancelled previously.

Please note that:

  • A log is recorded in the Logs tab while a macro is running; users can watch the real-time updates if the Logs tab is open.
  • No log is available for a macro that has not yet been run.

Data Connections Tab

In the Data Connections tab in the left-hand side pane, the available data connections are listed:

  1. User has clicked on the Data Connections tab in the left-hand side pane of DataStar to make this the active tab.
  2. All Data Connections currently set up within DataStar are listed here. With the exception of the Project Sandbox, which is unique to each project, all other connections are accessible by all DataStar projects at the moment. Currently, there are 4 data connections available: the Project Sandbox, a CSV File connection named Historical Shipments, and 2 Cosmic Frog Models. Connections can be expanded to view their content (e.g. the tables/views contained in them) by clicking on the greater than sign to the left of the connection's name. See the next screenshot for what the connections list looks like when the connections are expanded.
  3. To quickly find specific connections and/or the tables contained in them, the user can type into the Search box. Connections and tables whose names contain the search text will be shown in the filtered list. Please note that for the search to include the tables within a data connection, the data connection needs to be expanded (see previous bullet). If the option to not show empty tables is enabled (next bullet), the search will also skip empty tables and only return populated ones.
  4. Clicking on the filter icon will bring up 2 options for what is included when showing the contents of connections:
    1. Show Empty Tables - user can choose to show these in the tables list when the connection is expanded by leaving the checkbox checked (default) or, alternatively, uncheck this checkbox so that empty tables are hidden.
    2. Show Views - database connections can have views in them, which are named queries that run on 1 or multiple tables in the database. By default this checkbox is unchecked and views are not shown when a connection is expanded. However, users can choose to show them by checking this checkbox.

Next, we will have a look at what the connections list looks like when the connections have been expanded:

  1. The Project Sandbox connection, which is a Postgres database underneath, has been expanded:
    1. There are multiple schemas present in the Project Sandbox database; the one that contains the tables and will be shown when expanded is the Starburst schema.
    2. We see that there are 2 tables here, which have been populated by running the Customers from Shipments macro: the rawshipments table is the result of the Import task ("Import Raw Shipments") in the macro; it has 42.66k records. The customers table is the result of running the Run SQL task ("Create Unique Customers"), which has resulted in 1.33k unique customers.
  2. The Historical Shipments connection is the CSV File data connection connected to the shipments.csv file which contains raw shipment data. Since it is a CSV File connection, it has 1 table in it, which has the same name as the csv file it is connected to (shipments).
  3. This Cosmic Frog model connection is named Cosmic Frog NA Model. Cosmic Frog models are also Postgres databases underneath with a specific schema that Optilogic's Cosmic Frog application uses for optimizations (including network and transportation), simulations (including inventory), and Greenfield runs.
    1. The schema used for the tables in a Cosmic Frog model is called anura_2_8 and this schema is expanded in the connection to view the tables.
    2. In this example, we have chosen not to show empty tables and we see the first 4 populated tables in the list.
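The "Create Unique Customers" step described above amounts to a de-duplicating SELECT over the raw shipments table. The sketch below illustrates the idea with Python's built-in sqlite3 module as a stand-in for the Postgres sandbox; the column name and sample values are invented for illustration, and the real task runs against the starburst schema:

```python
import sqlite3

# In-memory stand-in for the Project Sandbox (the real sandbox is Postgres).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rawshipments (customername TEXT, quantity REAL)")
con.executemany(
    "INSERT INTO rawshipments VALUES (?, ?)",
    [("CZ_Austin", 10), ("CZ_Austin", 5), ("CZ_Boston", 7)],
)

# A "Create Unique Customers" style task: one row per distinct customer,
# written to a new table in the sandbox.
con.execute(
    "CREATE TABLE customers AS SELECT DISTINCT customername FROM rawshipments"
)
rows = [r[0] for r in con.execute("SELECT customername FROM customers ORDER BY 1")]
print(rows)  # ['CZ_Austin', 'CZ_Boston']
```

Three raw shipment rows collapse to two unique customers, mirroring how 42.66k shipment records reduced to 1.33k customers in the example project.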

Viewing a Connection's Table

The tables within a connection can be opened within DataStar. They are then displayed in the central part of DataStar where the Macro Canvas is showing when a macro is the active tab.

Please note: currently, a data preview of up to 10,000 records for a table is displayed for tables in DataStar. This means that any filtering or sorting done on tables larger than 10k records is done on this subset of 10k records. At the end of this section it is explained how datasets containing more than 10k records per table can be explored by using the SQL Editor application.

  1. A table is opened in the central part of DataStar by clicking on it in the connections list. Here user clicked on the rawshipments table to open it. The tabs across the top of the central part where the table is now displayed have the name of the table or macro that is open in that tab on them. Users can switch between tables and macros by clicking on the tabs. Currently, the rawshipments table and the customers table are both open, with the rawshipments table being shown since that is the active tab.
  2. At the moment, DataStar will show a preview of up to 10,000 records of any table. The total number of records in the table is also mentioned here. As mentioned above, this also means that any filtering or sorting is performed on this subset of up to 10k records.
  3. An additional menu named Table Functions is available in the toolbar when a table is open in DataStar's central part. Options from this menu are:
    1. Export to CSV - this will export the table to a csv file, which will be accessible from the user's Downloads area on the Optilogic platform (click on your username at the top right of the screen and select Downloads from the drop-down list). If you do not see the download there immediately, check back in a few minutes.
    2. Open in SQL Editor - for databases, this will open the database in the SQL Editor application on the Optilogic platform and show the table that was active in DataStar. A screenshot of a DataStar project sandbox database in SQL Editor and a link to a Help Center article on the SQL Editor application are included at the end of this section.
  4. This grid can be customized by the user (e.g. sort and change the order of columns), see the appendix on how to do this.
  5. Users can also filter the grid based on values in one or multiple columns. The next screenshot covers this in more detail.
  6. On the right-hand side, there are 2 panes available that will become visible when clicking on their names: 1) Columns: to configure which columns are shown in the grid and in which order, and 2) Filters: filters can also be configured from this fold out pane. Each of these are covered in a screenshot further below in this section. Once a pane has been opened, it can be closed again by clicking on its name on the right-hand side of the pane.
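The Export to CSV option writes the table out as a plain comma-separated file with a header row. As a rough sketch of the same transformation in Python (the rows and column names here are made up for illustration, not taken from DataStar):

```python
import csv
import io

# Hypothetical table rows as they might come back from the sandbox.
rows = [
    {"customername": "CZ_Austin", "units": 10},
    {"customername": "CZ_Boston", "units": 7},
]

# Write a header row followed by one line per record.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["customername", "units"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The resulting text is what ends up in the downloaded .csv file.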

A table can be filtered based on values in one or multiple columns:

  1. A column that has been filtered can be recognized by the blue filter icon to the right of the column name. This filter icon is black when not filtering on this column. Clicking on the filter icon brings up a form where filters can be configured.
  2. Currently, the product name field is filtered for records where the product name contains the text "chair". The match is case-insensitive, and all records that have "chair" anywhere in the product name (at the start, at the end, or somewhere in the middle) will be shown. Please note that:
    1. The filter is applied once the user hits the Enter key after typing into the Filter... text box.
    2. To remove a filter, the user needs to delete the text from the Filter... text box.
  3. A filter can consist of multiple parts; the "AND" vs "OR" selection here determines which records are shown. When using AND, only records that satisfy all filter parts will be shown. When using OR, records that satisfy at least one of the filter parts will be shown.
  4. Besides filtering records for their values containing certain text (see bullet 2 above), there are additional options available as shown in this drop-down list. After selecting the desired option, user can type in the Filter... text box (not visible in the above screenshot as it is covered by the filter type drop-down list). The drop-down list shown in the above screenshot is for columns of string/text data type. Different options are available for columns containing numerical data, as shown here for the Units column:
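The "contains" filter and the AND/OR combination of filter parts behave roughly like the pure-Python sketch below; the field name and sample product names are assumptions for illustration:

```python
def contains(field, text):
    """Case-insensitive 'contains' filter, like the grid's text filter."""
    return lambda row: text.lower() in row[field].lower()

def combine(parts, mode="AND"):
    """AND: all parts must match; OR: at least one part must match."""
    if mode == "AND":
        return lambda row: all(p(row) for p in parts)
    return lambda row: any(p(row) for p in parts)

rows = [
    {"productname": "Office Chair"},
    {"productname": "Folding chair deluxe"},
    {"productname": "Standing Desk"},
]

# Matches at the start, middle, or end of the name, regardless of case.
chair = contains("productname", "chair")
print([r["productname"] for r in rows if chair(r)])
# ['Office Chair', 'Folding chair deluxe']

# With AND, a record must satisfy every filter part.
both = combine([contains("productname", "chair"),
                contains("productname", "deluxe")], "AND")
print([r["productname"] for r in rows if both(r)])
# ['Folding chair deluxe']
```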

Columns can be re-ordered and hidden/shown as described in the Appendix; this can be done using the Columns fold-out pane too:

  1. Click on Columns on the right-hand side of the table to open the Columns pane.
  2. To find the column(s) of interest quickly, user can type into the Search... text box to filter the list of columns down to those containing the typed text in their name.
  3. These checkboxes are used to hide/show columns in the grid: uncheck a column's checkbox to hide it. Note that the checkbox at the top of the list can be used to hide/show all columns with one click.
  4. The order of the columns in the grid can be changed by clicking on the icon with 4x3 dots, then hold the mouse down and drag the column up or down. Let go of the mouse once the column is in the desired position.

Finally, filters can also be configured from a fold-out pane:

  1. Click on Filters on the right-hand side of the table to open the Filters pane.
  2. To find the column(s) you want to filter on quickly, you can type into the Search... text box to filter the list of columns down to those containing the typed text in their name.
  3. Click on the greater than icon to the left of the column name that you want to apply the filter to so that it expands and the filter configuration for the column becomes visible. Configure the filter as covered above by selecting the filter type from the drop-down and typing the filter criterion into the Filter... text box.
  4. A column that has a filter applied to it already can be recognized in the list: it has a filter icon to the right of its column name whereas unfiltered columns have no such icon displayed.

Users can explore the complete dataset of connections with tables larger than 10k records in other applications on the Optilogic platform, depending on the type of connection:

  • Lightning Editor: for CSV files
  • SQL Editor: for Postgres DB connections, which includes the Project Sandbox and Cosmic Frog models. See this SQL Editor Overview help center article on how to use the SQL Editor. For the Project Sandbox, please note that:
    • The name of the Project Sandbox database is the same as the project name
    • The tables that are created in the sandbox can be found under the “starburst” schema

Here is how to find the database and table(s) of interest in the SQL Editor:

  1. When logged into the Optilogic platform, click on the SQL Editor icon in the list of available applications on the left-hand side to open it. Your SQL Editor icon may be in a different location in the list, and if it is not visible at all, then click on the icon with 3 horizontal dots to show any applications that are not shown currently.
  2. Either use the Search box at the top to find your database of interest or scroll through the list. DataStar project sandbox databases can be recognized by the DataStar logo to the left of the database name. The name of a DataStar project sandbox database is that of the DataStar project it belongs to; in our example, "Import Historical Shipments".
  3. When expanding the database, the starburst schema, which contains the data connection's tables, will be expanded by default too.
  4. We see the customers and rawshipments tables that were the result of running the Customers from Shipments macro. Clicking on a table will run a query to show the first 20 records of that table.
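Clicking a table issues a simple preview query limited to the first 20 records. The sketch below shows the idea using sqlite3 as a stand-in for Postgres; in the real sandbox the table name would be schema-qualified (e.g. starburst.customers), and the sample data is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (customername TEXT)")
con.executemany(
    "INSERT INTO customers VALUES (?)",
    [(f"CZ_{i:03d}",) for i in range(100)],
)

# The preview shows only the first 20 records of the table.
preview = con.execute("SELECT * FROM customers LIMIT 20").fetchall()
print(len(preview))  # 20
```

The LIMIT clause keeps the preview fast even on tables with millions of rows.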

Helpful Resources

Here are a few additional links that may be helpful:

We hope you are as excited about starting to work with DataStar as we are! Please stay tuned for regular updates to both DataStar and all the accompanying documentation. As always, for any questions or feedback, feel free to contact our support team at support@optilogic.com.

Appendix - Customizing Grids

The grids used in DataStar can be customized; we will cover the available options using the screenshot below. This screenshot shows the list of CSV files in the user's Optilogic account when creating a new CSV File connection. The same grid options are available in the grid on the Logs tab and when viewing tables that are part of any Data Connection in the central part of DataStar.

  1. The columns in the grid can be dragged to change their order. They can also be resized by clicking on the vertical bar in between the columns (the mouse then changes to 2 arrows pointing away from each other), holding the mouse down, and moving right or left. Double-clicking while hovering over the vertical bar will autosize the column to fit its longest value.
  2. The grid can be sorted by the values of a column by clicking on its column name; this will sort the column in ascending order. Clicking the column name again changes the sort to descending order, and clicking a third time removes the sort from the column. Sorting by multiple columns is possible too: sort the first column as desired, then hold down the Ctrl and Shift keys while clicking on the name(s) of a second, third, etc. column to add them to the multi-sort. Numbers indicate the order of the sort. Here, the grid was first sorted by the Location column and then by File Name.
  3. Clicking on the icon with 3 vertical dots to the right of a column name will bring up a context menu with the following options:
    1. Sort Ascending / Sort Descending / Clear Sort: depending on whether the column is sorted and, if so, how, 2 of these 3 options will be listed for each column to quickly change or remove the sort on this column.
    2. Pin Column: columns can be put in a fixed position that will stay visible when scrolling. Options are to pin the column all the way to the left or all the way to the right of the grid.
    3. Autosize This Column: change the width of the column to fit its longest value.
    4. Autosize All Columns: change the width of all columns in the grid to fit their longest values.
    5. Choose Columns: brings up the list of columns present in the grid with the options to 1) hide them by unchecking their checkboxes and / or 2) change the column order by dragging columns to different positions in the list.
    6. Reset Columns: unhides all columns if any are hidden and puts them in their original order.
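The multi-column sort described in bullet 2 (Location first, then File Name) is equivalent to sorting on a tuple key, as in this small Python sketch; the field names and values are assumptions for illustration:

```python
files = [
    {"location": "My Files", "file_name": "shipments.csv"},
    {"location": "Shared", "file_name": "demand.csv"},
    {"location": "My Files", "file_name": "customers.csv"},
]

# Sort by Location first, then by File Name within each location,
# mirroring the grid's Ctrl+Shift multi-column sort.
ordered = sorted(files, key=lambda f: (f["location"], f["file_name"]))
print([(f["location"], f["file_name"]) for f in ordered])
```

Rows with the same Location stay grouped together, and File Name only breaks ties within each group.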

Appendix - Leapfrog Features & Limitations

Current Version Features:

  • Text to SQL generation capabilities through the “Run SQL” task. Currently, the following types of SQL queries are supported:
    • Insert
    • Update
    • Delete
    • Create Table
    • Alter Table
    • Union
  • Supported data connections: Project Sandbox (Starburst schema) relevant to the DataStar project.
  • Output formats - each prompt response typically contains:
    • A description of what Leapfrog creates, in the Documentation part
    • Run SQL task
    • Options to Run Task (not yet functional) or Add to Macro
  • Multi-turn conversation: Leapfrog has a ‘memory’ within each conversation. This means user can reference a previous prompt or response in a subsequent request.
  • Conversation history is kept, and the user can go back to previous conversations to, for example, add any Run SQL tasks that were not yet added to a macro, or to continue a conversation from where it was left off.
  • Multi-language support: users can submit their prompt in languages other than English, and the Documentation part of the response will be in the same language.
  • Leapfrog knows the Anura schema that Cosmic Frog models use, which facilitates the creation of tables with the same schema. These can then be exported to Cosmic Frog models (Export task coming soon!) and used in Cosmic Frog without further changes or updates.
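The supported statement types listed above cover the usual write-path SQL. To make the categories concrete, here is one statement of each kind run against a sqlite3 stand-in for the sandbox; the table and column names are invented for illustration, not generated by Leapfrog:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# One example per supported statement type:
con.execute("CREATE TABLE customers (customername TEXT, city TEXT)")        # Create Table
con.execute("ALTER TABLE customers ADD COLUMN country TEXT")                # Alter Table
con.execute("INSERT INTO customers VALUES ('CZ_Austin', 'Austin', 'US')")   # Insert
con.execute("UPDATE customers SET country = 'USA' WHERE country = 'US'")    # Update
con.execute("DELETE FROM customers WHERE city = 'Nowhere'")                 # Delete
rows = con.execute(
    "SELECT customername FROM customers UNION SELECT 'CZ_Boston'"           # Union
).fetchall()
print(sorted(r[0] for r in rows))  # ['CZ_Austin', 'CZ_Boston']
```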

Current Limitations

  • All prompts will result in a SQL task in this initial Early Adopter release. In the future, Leapfrog will try to suggest a no-code task unless you include keywords like 'write', 'generate', 'create', or 'give me' SQL (task) in your prompt.
  • Each task created from Leapfrog can only use 1 connection at a time, and this connection is the Project Sandbox (Starburst). Moving data from one connection to another is not yet supported.
  • Leapfrog in DataStar cannot answer questions about its own capabilities yet.

Appendix - Leapfrog Data Usage & Security

Training Data

Leapfrog's brainpower comes from:

  • Optilogic's Anura schema
  • Hundreds of handcrafted SQL examples from real supply chain experts

All training processes are owned and managed by Optilogic — no outside data is used.

Using Leapfrog

When you ask Leapfrog a question:

  • It securely accesses your data through an API.
  • Your data stays yours — no external sharing or external training.

Conversation History

Your conversations (prompts, answers, feedback) are stored securely at the user level.

  • Only you and authorized Optilogic personnel can view your history.
  • Other users cannot access your data.

Privacy and Ownership

  • You retain full ownership of your model data. Only authorized Optilogic personnel can access it.
  • Optilogic uses industry-standard security protocols to keep everything safe and sound.

Have More Questions?

  • Contact Support
  • Contact Sales
  • Visit the Frogger Pond Community