This video guides you through creating your free account and the features of the Optilogic Cosmic Frog supply chain design platform.
If you are running into issues receiving your account confirmation email, please see the troubleshooting article linked here.
Greenfield analysis (GF) is a method for determining the optimal location of facilities in a supply chain network. The Greenfield engine in Cosmic Frog is called Triad and this name comes from the oldest known species of frogs – Triadobatrachus. You can think of it as the starting point for the evolution of all frogs, and it serves as a great starting point for modeling projects too! We can use Triad to identify 3 key parameters:
GF is a great starting point for network design—it solves quickly and can reduce the number of candidate site locations in complicated design problems. However, a standard GF requires some assumptions to solve (e.g. single time period, single product). As a result, the output of a Triad model is best suited as initial information for building a more robust Cosmic Frog optimization (Neo) or simulation (Throg) model.
You can run GF in any Cosmic Frog model. Running a GF model only requires two input tables to be populated:

A third important table for running GF is the Greenfield Settings table in the Functional Tables section of the input tables. We call our GF approach “Intelligent Greenfield” because of the different parameters that can be configured in this settings table. The Greenfield Settings table is always populated with defaults, and users can change these as needed. See the Greenfield Settings Explained help article for an explanation of the fields in this table.
A greenfield analysis starts by clicking the “Run” button at the top right of the Cosmic Frog application, just like a Neo or Throg model.

After clicking on the Run button, the Run screen comes up:

Besides making changes to values in the Customers and/or Customer Demand tables, GF scenarios often change one or more settings on the Greenfield Settings table. The next screenshot shows an example of this:

To improve the solve speed of a Triad model, we can use customer clustering. Customer clustering reduces the size of the supply chain by grouping customers within a given geometric range into a single customer. We can set the clustering radius (in miles) in the Greenfield Settings table in the Customer Cluster Radius column.

Clustering is optional, and leaving this column blank is the same as turning off clustering.
While grouping customers can significantly improve the run time of the model, clustering may result in a loss of optimality. However, Greenfield is typically used as a starting point for a future Neo optimization model, so small losses in optimality at this phase are typically manageable.
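Triad's internal clustering algorithm is not documented here, but the idea of grouping customers within a radius can be illustrated with a rough, hypothetical sketch. The greedy approach, function names, and data shape below are our own for illustration, not Cosmic Frog's implementation:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in miles
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cluster_customers(customers, radius_miles):
    """Greedy illustration of radius-based clustering: each customer joins
    the first cluster whose seed lies within radius_miles; otherwise it
    seeds a new cluster. customers: list of (name, lat, lon, demand)."""
    clusters = []  # each: {"seed": (lat, lon), "members": [...], "demand": total}
    for name, lat, lon, demand in customers:
        for c in clusters:
            if haversine_miles(lat, lon, *c["seed"]) <= radius_miles:
                c["members"].append(name)
                c["demand"] += demand  # demand of grouped customers is combined
                break
        else:
            clusters.append({"seed": (lat, lon), "members": [name], "demand": demand})
    return clusters
```

With a 50-mile radius, two nearby east-coast customers collapse into one cluster while a west-coast customer stays separate, shrinking the problem the solver sees.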
Once you have run a model, you can visualize your results using the Analytics tab.

In Analytics, a dashboard is a collection of visualizations. Visualizations can take on many forms, such as charts, tables or maps.
In Cosmic Frog, there are default dashboards available to help you analyze your model results.


The default dashboards highlight some common analytics and metrics. They are designed to be interacted with through a set of filters.

We can hover over visualization elements to get more information. This floating card of information is called a “Tooltip”.

We can customize existing dashboards to fit our needs. For more information see Editing Dashboards and Visualizations.

Showing supply chains on maps is a great way to visualize them, to understand differences between scenarios, and to show how they evolve over time. Cosmic Frog offers users many configuration options to customize maps to their exact needs and compare them side-by-side. In this documentation we will cover how to create and configure maps in Cosmic Frog.
In Cosmic Frog, a map represents a single geographic visualization composed of different layers. A layer is an individual supply chain element such as a customer, product flow, or facility. To show locations on a map, these need to exist in the master tables (e.g. Customers, Facilities, and Suppliers) and they need to have been geocoded (see also the How to Geocode Locations section in this help center article). Flow-based layers are based on output tables, such as the OptimizationFlowSummary or SimulationFlowSummary; to draw these, the model needs to have been run so outputs are present in these tables.
Maps can be accessed through the Maps module in Cosmic Frog:

The Maps module opens and shows the first map in the Maps list; this will be the default pre-configured “Supply Chain” map for models the user created and for most models copied from the Resource Library:

In addition to what is mentioned under bullet 4 of the screenshot just above, users can also perform the following actions on maps:

As we have seen in the screenshot above, the Maps module opens with a list of pre-configured maps and layers on the left-hand side:

The Map menu in the toolbar at the top of the Maps module allows users to perform basic map and layer operations:

These options from the Map menu are also available in the context menu that comes up when right-clicking on a map or layer in the Maps list.
The Map Filters panel can be used to set scenarios for each map individually. If users want to use the same scenario for all maps present in the model, they can use the Global Scenario Filter located in the toolbar at the top of the Maps module:

Now all maps in the model will use the selected scenario, and the option to set the scenario at the map-level is disabled.
When a global scenario has been set, it can be removed using the Global Scenario Filter again:

The zoom level, how the map is centered, and the configuration of maps and their layers all persist. After moving between other modules within Cosmic Frog or switching between models, when the user comes back to the map(s) in a specific model, the map settings are the same as when last configured.
Now let us look at how users can add new maps, and the map configuration options available to them.

Once done typing the name of the new map, the panel on the right-hand side of the map changes to the Map Filters panel which can be used to select the scenario and products the map will be showing. If the user wants to see a side-by-side map comparison of 2 scenarios in the model, this can be configured here too:

In the screenshot above, the Comparison toggle is hidden by the Product drop-down. In the next screenshot it is shown. By default, this toggle is off; when sliding it right to be on, we can configure which scenario we want to compare the previously selected scenario to:

Please note:
Instead of setting which scenario to use for each map individually on the Map Filters panel, users can instead choose to set a global scenario for all maps to use, as discussed above in the Global Scenario Filter section. If a global scenario is set, the Scenario drop-down on the Map Filters panel will be disabled and the user cannot open it:

On the Map Information panel, users have a lot of options to configure what the map looks like and what entities (outside of the supply chain ones configured in the layers) are shown on it:

Users can choose to show a legend on the map and configure it on the Map Legend pane:

To start visualizing the supply chain that is being modelled on a map, the user needs to add at least one layer to the map, which can be done by choosing “New Layer” from the Map menu:

Once a layer has been added or is selected in the Maps list, the panel on the right-hand side of the map changes to the Condition Builder panel which can be used to select the input or output table and any filters on it to be used to draw the layer:

We will now also look at using the Named Filters option to filter the table used to draw the map layer:

In this walk-through example, the user chooses to enable the “DC1 and DC2” named filter:

Lastly on the Named Filters option, users have the option to view a grid preview to ensure the correct filtered records are being drawn on the map:

In the next layer configuration panel, Layer Style, users can choose what the supply chain entities that the layer shows will look like on the map. This panel looks somewhat different for layers that show locations (Type = Point) than for those that show flows (Type = Line). First, we will look at a point type layer (Customers):

Next, we will look at a line type layer, Customer Flows:

At the bottom of the Layer Style pane a Breakpoints toggle is available too (not shown in the screenshots above). To learn more about how these can be used and configured, please see the "Maps - Styling Points & Flows based on Breakpoints" Help Center article.
Labels and tooltips can be added to each layer, so users can more easily see properties of the entities shown in the layer. The Layer Labels configuration panel allows users to choose what to show as labels and tooltips, and configure the style of the labels:

When modelling multiple periods in network optimization (Neo) models, users can use the map to see how the supply chain evolves over time:

Users can now add Customers, Facilities and Suppliers via the map:

After adding the entity, we see it showing on the map, here as a dark blue circle, which is how the Customers layer is configured on this map:

Looking in the Customers table, we notice that CZ_Philadelphia has been added. Note that while its latitude and longitude fields are set, other fields such as City, Country and Region are not automatically filled out for entities added via the map:

In this final section, we will show a few example maps to give users some ideas of what maps can look like. The first screenshot shows a map of a Transportation Optimization (Hopper engine) model, Transportation Optimization UserDefinedVariables, available from Optilogic’s Resource Library (here):

Some notable features of this map are:
The next screenshot shows a map of a Greenfield (Triad engine) model:

Some notable features of this map are:
The following screenshot shows a subset of the customers in a Network Optimization (Neo engine) model, the Global Sourcing – Cost to Serve model available from Optilogic’s Resource Library (here). These customers are color-coded based on how profitable they are:

Some notable features of this map are:
Lastly, the following screenshot shows a map of the Tariffs example model, a network optimization (Neo engine) model available from Optilogic’s Resource Library (here), where suppliers located in Europe and China supply raw materials to the US and Mexico:

Some notable features of this map are:
We hope users feel empowered to create their own insightful maps. For any questions, please do not hesitate to contact Optilogic support at support@optilogic.com.
Scenarios allow you to run edited versions of your model in specific, controlled ways.
All scenarios have 3 components:

For an overview of the scenario features, please watch this video:
The only constant is change. When building our supply chains, the “optimal” design doesn’t only mean lowest cost. What happens if (or perhaps when) a disruption occurs? Fragile, low-cost supply chains can end up costing more in the long run if they aren’t resilient to the dynamic nature of today’s world.
We believe that optimality includes resilience. That’s why every Cosmic Frog run includes a risk rating from our DART risk engine.
Every Cosmic Frog run outputs an Opti Risk score. The Opti Risk score is an aggregate measure of the overall supply chain risk. It includes the following sub-categories:

After running a model, you can find the Opti Risk score (as well as the scores for each of the sub-categories) in the output risk tables. The Opti Risk score can also be found in the OptimizationNetworkSummary table.
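How DART combines the sub-category scores into the aggregate Opti Risk score is internal to the engine, but the idea of rolling sub-scores up into one number can be sketched conceptually. The weighted-average function and equal default weights below are purely illustrative, not DART's actual formula:

```python
def aggregate_risk(sub_scores, weights=None):
    """Illustrative only: combine sub-category risk scores (e.g. customer,
    facility, supplier, network) into one aggregate score via a weighted
    average. DART's actual aggregation formula is internal and may differ."""
    if weights is None:
        weights = {name: 1.0 for name in sub_scores}  # equal weights by default
    total_weight = sum(weights[name] for name in sub_scores)
    return sum(score * weights[name] for name, score in sub_scores.items()) / total_weight
```

With equal weights, for example, sub-scores of 40, 60, 20, and 80 would aggregate to 50.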


The overall Customer Risk score is an aggregation of each individual customer’s risk described in the OptimizationCustomerRiskMetrics or SimulationCustomerRiskMetrics tables. In each scenario, there is one risk score per customer per period.
Each customer risk score includes:

For each sub-category, the geographic risk score is also an aggregation of several risk factors:

Like the customer risk score, the overall facility risk score is an aggregation of risk across all facilities in your supply chain. In the FacilityRiskMetric tables, there is an individual risk score per facility per period.
The facility risk score includes:
The capacity risk has three sub-components:
The facility geographic risk has the same components as the customer geographic risk.


The supplier risk is calculated per supplier per period and includes:
Both the concentration and geographic risks include the same elements as described previously.

Network risk differs from the other risk scores in that it is not tied to a specific supply chain element. There is only one network risk score per scenario, and it includes:

The transport and import/export time risks are aggregated across individual origin/destination pairs for every product and transport mode. The individual risk scores can be found in the OptimizationFlowSummary table.

The stocking point count and supply make count risks are aggregations across every product and period. The individual risk scores can be found in the ProductRiskMetrics tables.

We can use our visualization tools to get a better sense of how risk varies across design scenarios.


Please watch this 5-minute video for an overview of DataStar, Optilogic’s new AI-powered data application designed to help supply chain teams build and update models & scenarios and power apps faster & easier than ever before!
For detailed DataStar documentation, please see Navigating DataStar on the Help Center.
DataStar is Optilogic’s new AI-powered data product designed to help supply chain teams build and update models & scenarios and power apps faster & easier than ever before. It enables users to create flexible, accessible, and repeatable workflows with zero learning curve—combining drag-and-drop simplicity, natural language AI, and deep supply chain context.
Today, up to an estimated 80% of a modeler's time is spent on data—connecting, cleaning, transforming, validating, and integrating it to build or refresh models. DataStar drastically shrinks that time, enabling teams to:
The 2 main goals of DataStar are 1) ease of use, and 2) effortless collaboration. These are achieved by:
In this documentation, we will start with a high-level overview of the DataStar building blocks. Next, creating projects and data connections will be covered before diving into the details of adding tasks and chaining them together into macros, which can then be run to accomplish the data goals of your project.
Please see this "Getting Started with DataStar: Application Overview" video for a quick 5-minute overview of DataStar.
Before diving into more details in later sections, this section will describe the main building blocks of DataStar, which include Data Connections, Projects, Macros, and Tasks.
Since DataStar is all about working with data, Data Connections are an important part of DataStar. These enable users to quickly connect to and pull in data from a range of data sources. Data Connections in DataStar:
Connections to other common data resources such as MySQL, OneDrive, SAP, and Snowflake will become available as built-in connection types over time. Currently, these data sources can be connected to by using scripts that pull them in from the Optilogic side or using ETL tools or automation platforms that push data onto the Optilogic platform. Please see the "DataStar: Data Integration" article for more details on working with both local and external data sources.
Users can check the Resource Library for the currently available template scripts and utilities. These can be copied to your account or downloaded; after a few updates around credentials, etc., you will be able to start pulling in data from external sources:

Projects are the main container of work within DataStar. Typically, a Project will aim to achieve a certain goal by performing all or a subset of importing specific data, then cleansing, transforming & blending it, and finally publishing the results to another file/database. The scope of DataStar Projects can vary greatly; consider the following two examples:
Projects consist of one or more macros, which in turn consist of one or more tasks. Tasks are the individual actions or steps which can be chained together within a macro to accomplish a specific goal.
The next screenshot shows an example Macro called "Transportation Policies" which consists of 7 individual tasks that are chained together to create transportation policies for a Cosmic Frog model from imported Shipments and Costs data:

Every project by default contains a Data Connection named Project Sandbox. This data connection is not global to all DataStar projects; it is specific to the project it is part of. The Project Sandbox is a Postgres database into which users generally import the raw data from the other data connections, perform transformations, and save intermediate states of the data, before publishing the results to a Cosmic Frog model (which is a data connection separate from the Project Sandbox connection). It is also possible that some of the data in the Project Sandbox is the final result/deliverable of the DataStar Project, or that the results are published into a different type of file or system that is set up as a data connection, rather than into a Cosmic Frog model.
The next diagram shows how Data Connections, Projects, and Macros relate to each other in DataStar:

As referenced above too, to learn more about working with both local and external data, please see this "DataStar: Data Integration" article.
On the start page of DataStar, users are shown their existing projects and data connections. These can be opened or deleted here, and users can also create new projects and data connections from this start page.
The next screenshot shows the existing projects in card format:

New projects can be created by clicking on the Create Project button in the toolbar at the top of the DataStar application:

If, on the Create Project form, a user decides they want to use a Template Project rather than a new Empty Project, it works as follows:

These template projects are also available on Optilogic's Resource Library:

After the copy process completes, we can see the project appear in the Explorer and in the Project list in DataStar:

Note that any files needed for data connections in template projects copied from the Resource Library can be found under the "Sent to Me" folder in the Explorer. They will be in a subfolder named @datastartemplateprojects#optilogic (the sender of the files).
The next screenshot shows the Data Connections that have already been set up in DataStar in list view:

New data connections can be created by clicking on the Create Data Connection button in the toolbar at the top of the DataStar application:

The remainder of the Create Data Connection form will change depending on the type of connection that was chosen as different types of connections require different inputs (e.g. host, port, server, schema, etc.). In our example, the user chooses CSV Files as the connection type:

In our walk-through here, the user drags and drops a Shipments.csv file from their local computer on top of the Drag and drop area:

Now let us look at a project when it is open in DataStar. We will first get a lay of the land with a high-level overview screenshot and then go into more detail for the different parts of the DataStar user interface:

Next, we will dive a bit deeper into a macro:

The Macro Canvas for the Customers from Shipments macro is shown in the following screenshot:

In addition to the above, please note the following regarding the Macro Canvas:

We will move on to covering the 2 tabs on the right-hand side pane, starting with the Tasks tab. Keep in mind that in the descriptions of the tasks below, the Project Sandbox is a Postgres database connection. The following tasks are currently available:

Users can click on a task in the tasks list and then drag and drop it onto the macro canvas to incorporate it into a macro. Once added to a macro, a task needs to be configured; this will be covered in the next section.
When adding a new task, it needs to be configured, which can be done on the Configuration tab. When a task is newly dropped onto the Macro Canvas its Configuration tab is automatically opened on the right-hand side pane. To make the configuration tab of an already existing task active, click on the task in the Macros tab on the left-hand side pane or click on the task in the Macro Canvas. The configuration options will differ by type of task, here the Configuration tab of an Import task is shown as an example:


Please note that:
The following table provides an overview of what connection type(s) can be used as the source / destination / target connection by which task(s), where PG is short for a PostgreSQL database connection and CF for a Cosmic Frog model connection:

Leapfrog in DataStar (aka D* AI) is an AI-powered feature that transforms natural language requests into executable DataStar Update and Run SQL tasks. Users can describe what they want to accomplish in plain language, and Leapfrog automatically generates the corresponding task query without requiring technical coding skills or manual inputs for task details. This capability enables both technical and non-technical users to efficiently manipulate data, build Cosmic Frog models, and extract insights through conversational interactions with Leapfrog within DataStar.
Note that there are 2 appendices at the end of this documentation where 1) details around Leapfrog in DataStar's current features & limitations are covered and 2) Leapfrog's data usage and security policies are summarized.


Leapfrog’s response to this prompt is as follows:

DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT
    destination_store AS customer,
    AVG(destination_latitude) AS latitude,
    AVG(destination_longitude) AS longitude
FROM rawshipments
GROUP BY destination_store;

To help users write prompts, the tables present in the Project Sandbox and their columns can be accessed from the prompt writing box by typing an @:


This user used the @ functionality repeatedly to write their prompt as follows, which helped to generate their required Run SQL task:

Now, we will also have a look at the Conversations tab while showing the 2 tabs in Split view:

Within a Leapfrog conversation, Leapfrog remembers the prompts and responses thus far. Users can therefore build upon previous questions, for example by following up with a prompt along the lines of “Like that, but instead of using a cutoff date of August 10, 2025, use September 24, 2025”.
Additional helpful DataStar Leapfrog links:
Users can run a Macro by selecting it and then clicking on the green Run button at the top right of the DataStar application:

Please note that:

Next, we will cover the Logs tab at the bottom of the Macro Canvas where logs of macros that are running/have been run can be found:

When a macro has not yet been run, the Logs tab will contain a message with a Run button, which can also be used to kick off a macro run. When a macro is running or has been run, the log will look similar to the following:

The next screenshot shows the log of a run of the same macro where the third task ended in an error:

The progress of DataStar macro and task runs can also be monitored in the Run Manager application where runs can be cancelled if needed too:

Please note that:
In the Data Connections tab on the left-hand side pane the available data connections are listed:

Next, we will have a look at what the connections list looks like when the connections have been expanded:

The tables within a connection can be opened within DataStar. They are then displayed in the central part of DataStar, where the Macro Canvas is shown when a macro is the active tab.
Please note:

A table can be filtered based on values in one or multiple columns:


Columns can be re-ordered and hidden/shown as described in the Appendix; this can be done using the Columns fold-out pane too:

Finally, filters can also be configured from a fold-out pane:

Users can explore the complete dataset of connections with tables larger than 10k records in other applications on the Optilogic platform, depending on the type of connection:
Here is how to find the database and table(s) of interest in SQL Editor:

Here are a few additional links that may be helpful:
We hope you are as excited about starting to work with DataStar as we are! Please stay tuned for regular updates to both DataStar and all the accompanying documentation. As always, for any questions or feedback, feel free to contact our support team at support@optilogic.com.
The grids used in DataStar can be customized, and we will cover the options available through the screenshot below. This screenshot is of the list of CSV files in a user's Optilogic account when creating a new CSV File connection. The same grid options are available on the grid in the Logs tab and when viewing tables that are part of any Data Connections in the central part of DataStar.

Leapfrog's brainpower comes from:
All training processes are owned and managed by Optilogic — no outside data is used.
When you ask Leapfrog a question:
Your conversations (prompts, answers, feedback) are stored securely at the user level.
Exciting tools that drastically shorten the time spent wrangling data, building supply chain models for Cosmic Frog, and analyzing outputs of these models are now available on the Optilogic platform.
This documentation briefly explains how to access these AI Agents and Utilities, lists the available tools with a short description of each, and provides links to detailed documentation for several of these tools.
Before we dive into how to access the AI Agents & Utilities, here are a few links you may find helpful:
Currently, the available Agents and Utilities are accessed by using Run Utility tasks in DataStar. At a high level, the steps are as follows (screenshots follow beneath):
Your macro canvas will look similar to the following screenshot after step #4:

After adding a task, its configuration tab is automatically shown on the right-hand side. Give the task a name, and then select the Agent or Utility you want to use from the list of available Agents/Utilities in the Select Utility section. You can also use the Search box to quickly find any Agent/Utility that contains certain text in its name or description. Hover over the description of an Agent/Utility to see the full description in case it is not entirely visible:

Once an Agent/Utility has been selected by clicking on it, the Configure Utility section becomes available. The inputs here will differ based on the Agent/Utility that has been selected. In the next screenshot the Configure Utility section of the Duplicate Macro utility is shown:

Provide the inputs for at least the required parameters, and if desired for any optional ones. Note that hovering over a blue question mark icon will bring up a hover box with a description of the parameter. For example, hovering over the blue question mark of the Source Macro Name parameter brings up "Name of the macro to duplicate (case-sensitive)".
The following AI Agents and Utilities are currently available; more are being added as they become available. For each, a short description is given, and for those that have more detailed documentation, a link to this documentation is included.
DataStar users typically will want to use data from a variety of sources in their projects. This data can be in different locations and systems and there are multiple methods available to get the required data into the DataStar application. In this documentation we will describe the main categories of data sources users may want to use and the possible ways of making these available in DataStar for usage.
If you would first like to learn more about DataStar before diving into data integration specifics, please see the Navigating DataStar articles on the Optilogic Help Center.
The following diagram shows different data sources and the data transfer pathways to make them available for use in DataStar:

We will dive a bit deeper into making local data available for use in DataStar building upon what was covered under bullets 5a-5c in the previous screenshot. First, we will familiarize ourselves with the layout of the Optilogic platform:

Next, through the next set of screenshots, we will cover in detail the 3 steps to go from data sitting locally on a user’s computer to being able to use it in DataStar. At a high level, the steps are:
To get local data onto the Optilogic platform, we can use the file / folder upload option:


Select either the file(s) or folder you want to upload by browsing to it/them. After clicking on Open, the File Upload form will be shown again:

Note that files in the upload list that will not cause name conflicts can also be renamed, or removed from the list if so desired. This can be convenient when you want to upload most files in a folder except for a select few: use the Add Folder option and then remove the few files that should not be uploaded from the list, rather than using Add Files and manually selecting almost all files in the folder.
Once the files are uploaded, you will be able to see them in the Explorer by expanding the folder they were uploaded to or searching for (part of) their name using the Search box.
The second step is to then make these files visible to DataStar by setting up Data Connections to them:

After setting up a Data Connection to a Cosmic Frog model and to a CSV file, we can see the source files in the Explorer, and the Data Connections pointing to these in DataStar side-by-side:

To start using the data in DataStar, we need to take the third step of importing the data from the data connections into a project. Typically, the data will be imported into the Project Sandbox, but this could also be into another Postgres database, including a Cosmic Frog model. Importing data is done using Import tasks; the Configuration tab of one is shown in this next screenshot:

The 3 steps described above are summarized in the following sequence of screenshots:

For a data workflow that is used repeatedly and needs to be re-run using the latest data regularly, users do not need to go through all 3 steps above of uploading data, creating/re-configuring data connections, and creating/re-configuring Import tasks to refresh local data. If the new files to be used have the same name and same data structure as the current ones, replacing the files on the Optilogic platform with the newer ones will suffice (so only step 1 is needed); the data connections and Import tasks do not need to be updated or re-configured. Users can do this manually or programmatically:
This section describes how to bring external data into DataStar using supported integration patterns where the data transfer is started from an external system, i.e. the data is “pushed” onto the Optilogic platform.
External systems such as ETL tools, automation platforms, or custom scripts can load data into DataStar through the Optilogic Pioneer API (please see the Optilogic REST API documentation for details). This approach is ideal when you want to programmatically upload files, refresh datasets, or orchestrate transformations without connecting directly to the underlying database.
Key points:
Please note that Optilogic has developed a Python library to facilitate scripting for DataStar. If your external system is Python based, you can leverage this library as a wrapper for the API. For more details on working with the library and a code example of accessing a DataStar project’s sandbox, see this “Using the DataStar Python Library” help center article.
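As a minimal sketch of such a push from a Python-based external system, a file upload over HTTP could look like the following. Note that the base URL, route, and header name below are hypothetical placeholders, not the documented Pioneer API values; consult the Optilogic REST API documentation (or use the DataStar Python library) for the actual endpoints and authentication:

```python
import urllib.request

API_BASE = "https://api.example-optilogic-host.com"  # hypothetical base URL -- see the REST API docs

def build_upload_request(remote_path, app_key, body):
    """Build an authenticated POST request for a file upload. The route
    ('/v0/...') and header name ('X-App-Key') are illustrative placeholders."""
    req = urllib.request.Request(
        f"{API_BASE}/v0/{remote_path}",
        data=body,
        method="POST",
    )
    req.add_header("X-App-Key", app_key)
    return req

def upload_file(local_path, remote_path, app_key):
    """Push a local file onto the platform so existing data connections
    and Import tasks pick up the refreshed data on the next macro run."""
    with open(local_path, "rb") as f:
        req = build_upload_request(remote_path, app_key, f.read())
    with urllib.request.urlopen(req) as resp:  # performs the actual upload
        return resp.status
```

An ETL tool or scheduler could call `upload_file` after each data refresh, so only the file on the platform changes and the DataStar project itself needs no reconfiguration.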
Every DataStar project is backed by a PostgreSQL database. You can connect directly to this database using any PostgreSQL-compatible driver, including:
This enables you to write or update data using SQL, query the sandbox tables, or automate recurring loads. The same approach applies to both DataStar projects and Cosmic Frog models since both use PostgreSQL under the hood. Please see this help center article on how to retrieve connection strings for Cosmic Frog model and DataStar project databases; these will need to be passed into the database connection to gain access to the model / project database.
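As an illustrative sketch (the connection string below is a made-up placeholder, not a real one), a retrieved connection string can be parsed into the individual parameters a PostgreSQL driver expects:

```python
from urllib.parse import urlparse

# Hypothetical connection string in PostgreSQL URI format; the real one is
# retrieved from the Optilogic platform as described in the help center article.
conn_str = "postgresql://frog_user:s3cret@db.example.optilogic.app:5432/datastar_project"

parts = urlparse(conn_str)
db_params = {
    "host": parts.hostname,
    "port": parts.port,
    "dbname": parts.path.lstrip("/"),  # path component minus the leading slash
    "user": parts.username,
    "password": parts.password,
}
# These parameters (or the connection string itself) can be passed to any
# PostgreSQL-compatible driver, e.g. psycopg2.connect(**db_params).
print(db_params)
```

Most drivers, including psycopg2 and SQLAlchemy, also accept the connection string directly, so parsing it is optional; it is shown here mainly to clarify what the string contains.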
Several scripts and utilities to connect to common external data sources, including Databricks, Google Big Query, Google Drive, and Snowflake, are available on Optilogic’s Resource Library:

These utilities and scripts can serve as starting points that you modify into your own script for connecting to and retrieving data from a particular data source. You will need to update the authentication and connection information in the scripts and configure the user settings to your needs. For example, this is the User Input section of the “Databricks Data Import Script”:

The user needs to update the following lines; the others can be left at their defaults and only updated if desired or required:
For any questions or feedback, please feel free to reach out to the Optilogic support team on support@optilogic.com.
In this quick start guide we will walk-through importing a CSV file into the Project Sandbox of a DataStar project. The steps involved are:
Our example CSV file is one that contains historical shipments from May 2024 through August 2025. There are 42,656 records in this Shipments.csv file, and if you want to follow along with the steps below you can download a zip-file containing it here (please note that the long character string at the beginning of the zip's file name is expected).
Open the DataStar application on the Optilogic platform and click on the Create Data Connection button in the toolbar at the top:

In the Create Data Connection form that comes up, enter the name for the data connection, optionally add a description, and select CSV Files from the Connection Type drop-down list:

If your CSV file is not yet on the Optilogic platform, you can drag and drop it onto the “Drag and drop” area of the form to upload it to the /My Files/DataStar folder. If it is already on the Optilogic platform, or after uploading it through the drag and drop option, you can select it in the list of CSV files. Once selected, it becomes greyed out in the list to indicate it is the file being used; it is also pinned at the top of the list with a darker background shade, so users know which file is selected without scrolling. Note that you can filter this list by typing in the Search box to quickly find the desired file. Once the file is selected, clicking on the Add Connection button will create the CSV connection:

After creating the connection, the Data Connections tab on the DataStar start page will be active, and it shows the newly added CSV connection at the top of the list (note the connections list is shown in list view here; the other option is card view):

You can either go into an existing DataStar project or create a new one to set up a Macro that will import the data from the Historical Shipments CSV connection we just set up. For this example, we create a new project by clicking on the Create Project button in the toolbar at the top when on the start page of DataStar. Enter the name for the project, optionally add a description, change the appearance of the project if desired by clicking on the Edit button, and then click on the Add Project button:

After the project is created, the Projects tab will be shown on the DataStar start page. Click on the newly created project to open it in DataStar. Inside DataStar, you can either click on the Create Macro button in the toolbar at the top or the Create a Macro button in the center part of the application (the Macro Canvas) to create a new macro which will then be listed in the Macros tab in the left-hand side panel. Type the name for the macro into the textbox:

When a macro is created, it automatically gets a Start task added to it. Next, we open the Tasks tab by clicking on the leftmost tab in the panel on the right-hand side of the macro canvas. Click on Import and drag it onto the macro canvas:

When hovering close to the Start task, it will be suggested to connect the new Import task to the Start task. Dropping the Import task here will create the connecting line between the 2 tasks automatically. Once the Import task is placed on the macro canvas, the Configuration tab in the right-hand side panel will be opened. Here users can enter the name for the task, select the data connection that is the source for the import (the Historical Shipments CSV connection), and the data connection that is the destination of the import (a new table named “rawshipments” in the Project Sandbox):

If not yet connected automatically in the previous step, connect the Import Raw Shipments task to the Start task by clicking on the connection point in the middle of the right edge of the Start task, holding the mouse down and dragging the connection line to the connection point in the middle of the left edge of the Import Raw Shipments task. Next, we can test the macro that has been set up so far by running it: either click on the green Run button in the toolbar at the top of DataStar or click on the Run button in the Logs tab at the bottom of the macro canvas:

You can follow the progress of the Macro run in the Logs tab and once finished examine the results on the Data Connections tab. Expand the Project Sandbox data connection to open the rawshipments table by clicking on it. A preview of the table of up to 10,000 records will be displayed in the central part of DataStar:

In this quick start guide we will show how Leapfrog AI can be used in DataStar to generate tasks from natural language prompts, no coding necessary!
This quick start guide builds upon the previous one, where a CSV file was imported into the Project Sandbox; please follow the steps in there first if you want to follow along with the steps in this quick start. The starting point for this quick start is therefore a project named Import Historical Shipments that has a Historical Shipments data connection of type = CSV, and a table in the Project Sandbox named rawshipments, which contains 42,656 records.
The Shipments.csv file that was imported into the rawshipments table has following data structure (showing 5 of the 42,656 records):

Our goal in this quick start is to create a task using Leapfrog that will use this data (from the rawshipments table in the Project Sandbox) to create a list of unique customers, where the destination stores function as the customers. Ultimately, this list of customers will be used to populate the Customers input table of a Cosmic Frog model. A few things to consider when formulating the prompt are:
Within the Import Historical Shipments DataStar project, click on the Import Shipments macro to open it in the macro canvas; you should see the Start and Import Raw Shipments tasks on the canvas. Then open Leapfrog by clicking on the Ask Leapfrog AI button to the right in the toolbar at the top of DataStar. This will open the Leapfrog tab, where a welcome message will be shown. Next, we can write our prompt in the “Write a message…” textbox.

Keeping in mind the 5 items mentioned above, the prompt we use is the following: “Use the @rawshipments table to create unique customers (use the @rawshipments.destination_store column); average the latitudes and longitudes. Only use records with the @rawshipments.ship_date between July 1 2024 and June 30 2025. Match to the anura schema of the Customers table”. Please note that:
After clicking on the send icon to submit the prompt, Leapfrog will take a few seconds to consider the prompt and formulate a response. The response will look similar to the following screenshot, where we see from top to bottom:

For copy-pasting purposes, the resulting SQL Script is repeated here:
DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT
destination_store AS customername,
AVG(destination_latitude) AS latitude,
AVG(destination_longitude) AS longitude
FROM rawshipments
WHERE
TO_DATE(ship_date, 'DD/MM/YYYY') >= '2024-07-01'::DATE
AND TO_DATE(ship_date, 'DD/MM/YYYY') <= '2025-06-30'::DATE
GROUP BY destination_store;
Those who are familiar with SQL will be able to tell that this will indeed achieve our goal. Since that is the case, we can click on the Add to Macro button at the bottom of Leapfrog’s response to add this as a Run SQL task to our Import Shipments macro. When hovering over this button, you will see that Leapfrog suggests where to put the task on the macro canvas and that it will be connected to the Import Raw Shipments task, which is what we want. Clicking the Add to Macro button then adds it.

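As a sanity check, the filtering and averaging this SQL script performs can be mimicked in plain Python on a few made-up rows (the column names follow the rawshipments table; the values are hypothetical):

```python
from datetime import datetime
from collections import defaultdict

# Tiny hypothetical sample mimicking rawshipments records.
rows = [
    {"destination_store": "CZ1", "destination_latitude": 40.0,
     "destination_longitude": -75.0, "ship_date": "15/07/2024"},
    {"destination_store": "CZ1", "destination_latitude": 42.0,
     "destination_longitude": -77.0, "ship_date": "01/03/2025"},
    {"destination_store": "CZ2", "destination_latitude": 33.0,
     "destination_longitude": -97.0, "ship_date": "30/06/2024"},  # outside the date window
]

start, end = datetime(2024, 7, 1), datetime(2025, 6, 30)
acc = defaultdict(lambda: {"lat": 0.0, "lon": 0.0, "n": 0})
for r in rows:
    # TO_DATE(ship_date, 'DD/MM/YYYY') plus the WHERE clause of the script
    d = datetime.strptime(r["ship_date"], "%d/%m/%Y")
    if start <= d <= end:
        a = acc[r["destination_store"]]
        a["lat"] += r["destination_latitude"]
        a["lon"] += r["destination_longitude"]
        a["n"] += 1

# GROUP BY destination_store with AVG(latitude), AVG(longitude)
customers = {s: (v["lat"] / v["n"], v["lon"] / v["n"]) for s, v in acc.items()}
print(customers)  # CZ1 averaged over its 2 in-window shipments; CZ2 filtered out
```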
We can test our macro so far, by clicking on the green Run button at the right top of DataStar. Please note that:
Once the macro is done running, we can check the results. Go to the Data Connections tab, expand the Project Sandbox connection and click on the customers table to open it in the central part of DataStar:

We see that the customers table resulting from running the Leapfrog-created Run SQL task contains 1,333 records. Also notice that its schema matches that of the Customers table of Cosmic Frog models, which includes columns named customername, latitude, and longitude.
Writing prompts for Leapfrog that will create successful responses (e.g. the SQL Script generated will achieve what the prompt-writer intended) may take a bit of practice. This Mastering Leapfrog for SQL Use Cases: How to write Prompts that get Results post on the Frogger Pond community portal has some great advice which applies to Leapfrog in DataStar too. It is highly recommended to give it a read; the main points of advice follow here too:
As an example, let us look at variations of the prompt we used in this quick start guide, to gauge the level of granularity needed for a successful response. In this table, the prompts are listed from least to most granular:
Note that in the above prompts we are quite precise about table and column names, and no typos are made by the prompt writer. However, Leapfrog can generally cope well with typos and can often pick up table and column names even when they are not used explicitly in the prompt. So while being more explicit generally results in higher accuracy, it is not necessary to always be extremely explicit; we simply recommend being as explicit as you can.
In addition, these example prompts do not use the @ character to specify tables and columns to use, but they could to facilitate prompt writing further.
In this quick start guide we will walk through the steps of exporting data from a table in the Project Sandbox to a table in a Cosmic Frog model.
This quick start guide builds upon a previous one where unique customers were created from historical shipments using a Leapfrog-generated Run SQL task. Please follow the steps in that quick start guide first if you want to follow along with the steps in this one. The starting point for this quick start is therefore a project named Import Historical Shipments, which contains a macro called Import Shipments. This macro has an Import task and a Run SQL task. The project has a Historical Shipments data connection of type = CSV, and the Project Sandbox contains 2 tables named rawshipments (42,656 records) and customers (1,333 records).
The steps we will walk through in this quick start guide are:
First, we will create a new Cosmic Frog model which does not have any data in it. We want to use this model to receive the data we export from the Project Sandbox.
As shown with the numbered steps in the screenshot below: while on the start page of Cosmic Frog, click on the Create Model button at the top of the screen. In the Create Frog Model form that comes up, type the model name, optionally add a description, and select the Empty Model option. Click on the Create Model button to complete the creation of the new model:

Next, we want to create a connection to the just created empty Cosmic Frog model in DataStar. To do so: open your DataStar application, then click on the Create Data Connection button at the top of the screen. In the Create Data Connection form that comes up, type the name of the connection (we are using the same name as the model, i.e. “Empty CF Model for DataStar Export”), optionally add a description, select Cosmic Frog Models in the Connection Type drop-down list, click on the name of the newly created empty model in the list of models, and click on Add Connection. The new data connection will now be shown in the list of connections on the Data Connections tab (shown in list format here):

Now, go to the Projects tab, and click on the “Import Historical Shipments” project to open it. We will first have a look at the Project Sandbox and the empty Cosmic Frog model connections, so click on the Data Connections tab:

The next step is to add and configure an Export Task to the Import Shipments macro. Click on the Macros tab in the panel on the left-hand side, and then on the Import Shipments macro to open it. Click on the Export task in the Tasks panel on the right-hand side and drag it onto the Macro Canvas. If you drag it close to the Run SQL task, it will automatically connect to it once you drop the Export task:

The Configuration panel on the right has now become the active panel:

Click on the AutoMap button, and in the message that comes up, select either Replace Mappings or Add New Mappings. Since we have not mapped anything yet, the result will be the same in this case. After using the AutoMap option, the mapping looks as follows:

We see that each source column is now mapped to a destination column of the same name. This is what we expect, since in the previous quick start guide, we made sure to tell Leapfrog when generating the Run SQL task for creating unique customers to match the schema of the customers table in Cosmic Frog models (“the Anura schema”).
If the Import Shipments macro has been run previously, we can just run the new Export Customers task by itself (hover over the task in the Macro Canvas and click on the play button that comes up), otherwise we can choose to run the full macro by clicking on the green Run button at the right top. Once completed, click on the Data Connections tab to check the results:

Above, the AutoMap functionality was used to map all 3 source columns to the correct destination columns. Here, we will go into some more detail on manually mapping and additional options users have to quickly sort and filter the list of mappings.

In this quick start guide we will walk through the steps of modifying data in a table in the Project Sandbox using Update tasks. These changes can either be made to all records in a table or a subset based on a filtering condition. Any PostgreSQL function can be used when configuring the update statements and conditions of Update tasks.
This quick start guide builds upon a previous one where unique customers were created from historical shipments using a Leapfrog-generated Run SQL task. Please follow the steps in that quick start guide first if you want to follow along with the steps in this one. The starting point for this quick start is therefore a project named “Import Historical Shipments”, which contains a macro called Import Shipments. This macro has an Import task and a Run SQL task. The project has a Historical Shipments data connection of type = CSV, and the Project Sandbox contains 2 tables named rawshipments (42,656 records) and customers (1,333 records). Note that if you also followed one of the other quick start guides on exporting data to a Cosmic Frog model (see here), your project will also contain an Export task, and a Cosmic Frog data connection; you can still follow along with this quick start guide too.
The steps we will walk through in this quick start guide are:
We have a look at the customers table which was created from the historical shipment data in the previous 2 quick start guides, see the screenshot below. Sorting on the customername column, we see the names are in alphabetical order. Because the customername column is of type text (its values start with the string “CZ”), sorting is lexicographic, so the names are not ordered by the number part that follows the “CZ” prefix.

If we want sorting customer names alphabetically to result in the same order as sorting by the number part of the customer name, we need to make sure each customer name has the same number of digits. We will use Update tasks to change the format of the number part of the customer names so that they are all 4 digits, by adding leading 0’s to those that have fewer than 4 digits. While we are at it, we will also replace the “CZ” prefix with “Cust_” to make the data consistent with other data sources that contain customer names. We will initially break the updates to the customername column up into 3 steps using 3 Update tasks. At the end, we will see how they can be combined into a single Update task. The 3 steps are:
1. Remove the “CZ” prefix from the start of each customer name (using the substring function).
2. Add leading zeroes so the number part always has 4 digits (using the lpad function).
3. Add the “Cust_” prefix (using the concat function).
Let us add the first Update task to our Import Shipments macro:

After dropping the Update task onto the macro canvas, its configuration tab will be opened automatically on the right-hand side:

If you have not already, click on the plus button to add your first update statement:

Next, we will write the expression, for which we can use the Expression Builder area just below the update statements table. What we type there will also be added to the Expression column of the selected Update Statement. These expressions can use any PostgreSQL function, including those which are not pre-populated in the helper lists. Please see the PostgreSQL documentation for all available functions.

When clicking in the Expression Builder, an equal sign is already there, and a list of items comes up. At the top are the columns that are present in the target table, and below those is a list of string functions which we can select to use. Here the functions shown are string functions, since we are working on a text-type column; when working on a column with a different data type, the functions relevant to that data type will be shown instead. We will select the last option shown in the screenshot, the substring function, since we want to first remove the “CZ” from the start of the customer names:

The substring function needs at least 2 arguments, which are specified in the parentheses. The first argument needs to be the customername column in our case, since that is the column containing the string values we want to manipulate. After typing a “c”, the customername column and 2 functions starting with “c” are suggested in the pop-up list. We choose the customername column. The second argument specifies the location from where we want the substring to start. Since we want to remove the “CZ”, we specify 3 as the start location, leaving characters number 1 and 2 off. The third argument is optional; it indicates the end location of the substring. We do not specify it, meaning we keep all characters from character number 3 onwards:

We can run this task now without specifying a Condition (see section further below) in which case the expression will be applied to all records in the customers table. After running the task, we open the customers table to see the result:

We see that our intended change was made: the “CZ” is removed from the customer names. Sorted alphabetically, they are still not in increasing order of the number part of their name. Next, we use the lpad (left pad) function to add leading zeroes so all customer names consist of 4 digits. This function has 3 arguments: the string to apply the left padding to (the customername column), the number of characters the final string needs to have (4), and the padding character (‘0’).

After running this task, the customername column values are as follows:

Now with the leading zeroes and all customer names being 4 characters long, sorting alphabetically results in the same order as sorting by the number part of the customer name.
Finally, we want to add the prefix “Cust_”. We use the concat (concatenation) function for this. At first, we type Cust_ with double quotes around it, but the squiggly red line below the expression in the expression builder indicates this is not the right syntax. Hovering over the expression in the expression builder explains the problem:

The correct syntax for using strings in these functions is to use single quotes:

Instead of concat we can also use “= ‘Cust_’ || customername” as the expression. The double pipe symbol is used in PostgreSQL as the concatenation operator.
Running this third update task results in the following customer names in the customers table:

Our goal of how we wanted to update the customername column has been achieved. Our macro now looks as follows with the 3 Update tasks added:

The 3 tasks described above can be combined into 1 Update task by nesting the expressions as follows:

Running this task instead of the 3 above will result in the same changes to the customername column in the customers table.
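For intuition, the nested expression behaves like this small Python sketch (assuming, as in our data, that customer names are “CZ” followed by up to 4 digits):

```python
def update_name(name: str) -> str:
    """Mimics concat('Cust_', lpad(substring(customername, 3), 4, '0'))."""
    n = name[2:]            # substring(customername, 3): drop the "CZ" prefix
    n = n.rjust(4, "0")     # lpad(..., 4, '0'); note: unlike lpad, rjust never
                            # truncates a string that is already longer than 4
    return "Cust_" + n      # concat('Cust_', ...)

print(update_name("CZ7"))     # Cust_0007
print(update_name("CZ1234"))  # Cust_1234
```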
Please note that in the above we only specified one update statement in each Update task. You can add more than one update statement per update task, in which case:
As mentioned above, the list of suggested functions is different depending on the data type of the column being updated. This screenshot shows part of the suggested functions for a number column:

At the bottom of the Expression Builder are multiple helper tabs to facilitate quickly building your desired expressions. The first one is the Function Helper, which lists the available functions by category: string, numeric, date, aggregate, and conditional. At the top of the list, users have search, filter, and sort options available to quickly find a function of interest. Hovering over a function in the list will bring up details of the function, from top to bottom: a summary of the format and input and output data types of the function, a description of what the function does, its input parameter(s), what it returns, and an example:

The next helper tab contains the Field Helper. This lists all the columns of the target table, sorted by their data type. Again, to quickly find the desired field, users can search, filter, and sort the list using the options at the top of the list:

The fourth tab is the Operator Helper, which lists several helpful numerical and string operators. This list can be searched too using the Search box at the top of the list:

There is another optional configuration section for Update tasks, the Condition section. In here, users can specify an expression to filter the target table on before applying the update(s) specified in the Update Statements section. This way, the updates are only applied to the subset of records that match the condition.
In this example, we will look at some records of the rawshipments table in the project sandbox of the same project (“Import Historical Shipments”). We have opened this table in a grid and filtered for origin_dc Salt Lake City DC and destination_store CZ103.

What we want to do is update the “units” column and increase the values by 50% for the Table product. The Update Statements section shows that we set the units field to its current value multiplied by 1.5, which will achieve the 50% increase:

However, if we run the Update task as is, all values in the units field will be increased by 50%, for both the Table and the Chair product. To make sure we only apply this increase to the Table product, we configure the Condition section as follows:

The condition builder has the same function, field, and operator helper tabs at the bottom as the expression builder in the update statements section to enable users to quickly build their conditions. Building conditions works in the same way as building expressions.
Running the task and checking the updated rawshipments table for the same subset of records as we saw above, we can check that it worked as intended. The values in the units column for the Table records are indeed 1.5 times their original value, while the Chair units are unchanged.

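Conceptually, an Update task with a condition behaves like the following Python sketch; the rows and the product column name (product_name) are assumptions for illustration only:

```python
# Hypothetical in-memory stand-in for a few rawshipments records.
shipments = [
    {"product_name": "Table", "origin_dc": "Salt Lake City DC", "units": 100.0},
    {"product_name": "Chair", "origin_dc": "Salt Lake City DC", "units": 80.0},
]

for row in shipments:
    # Condition: only records for the Table product match...
    if row["product_name"] == "Table":
        # ...and only those get the Update Statement applied: units = units * 1.5
        row["units"] *= 1.5

print(shipments)  # Table units increased by 50%, Chair units unchanged
```

A combined condition, such as the Dallas DC / Detroit DC example discussed below, would simply extend the if statement with and / or clauses.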
It is important to note that opening tables in DataStar currently shows a preview of 10,000 records. When filtering a table by clicking on the filter icons to the right of a column name, only the resulting subset of records from those first 10,000 records will be included. While an Update task will be applied to all records in a table, due to this limit on the number of records in the preview you may not always be able to see (all) results of your Update task in the grid. In addition, an Update task can also change the order of the records in the table. This can lead to a filter showing a different set of records after running an Update task than the filtered subset that was shown prior to running it. Users can use the SQL Editor application on the Optilogic platform to see the full set of records for any table.
Finally, if you want to apply multiple conditions you can use logical AND and OR statements to combine them in the Expression Builder. You would for example specify the condition as follows if you want to increase the units for the Table product by 50% only for the records where the origin_dc value is either “Dallas DC” or “Detroit DC”:

In this quick start guide we will show how users can seamlessly go from using the Resource Library, Cosmic Frog, and DataStar applications on the Optilogic platform to creating visualizations in Power BI. The example covers a cost to serve analysis using a global sourcing model. We will run 2 scenarios in this Cosmic Frog model with the goal of visualizing the total cost difference between the scenarios by customer on a map. We do this by coloring the customers based on the cost difference.
The steps we will walk through are:
We will first copy the model named “Global Sourcing – Cost to Serve” from the Resource Library to our Optilogic account (learn more about the Resource Library in this help center article):

On the Optilogic platform, go to the Resource Library application by clicking on its icon in the list of applications on the left-hand side; note that you may need to scroll down. Should you not see the Resource Library icon here, click on the icon with 3 horizontal dots, which will show all applications, including any that were previously hidden.
Now that the model is in the user’s account, it can be opened in the Cosmic Frog application:


We will only have a brief look at some high-level outputs in Cosmic Frog in this quick start guide, but feel free to explore additional outputs. You can learn more about Cosmic Frog through these help center articles. Let us have a quick look at the Optimization Network Summary output table and the map:


Our next step is to import the needed input table and output table of the Global Sourcing – Cost to Serve model into DataStar. Open the DataStar application on the Optilogic platform by clicking on its icon in the applications list on the left-hand side. In DataStar, we first create a new project named “Cost to Serve Analysis” and set up a data connection to the Global Sourcing – Cost to Serve model, which we will call “Global Sourcing C2S CF Model”. See the Creating Projects & Data Connections section in the Getting Started with DataStar help center article on how to create projects and data connections. Then, we want to create a macro which will calculate the increase/decrease in total cost by customer between the 2 scenarios. We build this macro as follows:

The configuration of the first import task, C2S Path Summary, is shown in this screenshot:

The configuration of the other import task, Customers, uses the same Source Data Connection, but instead of the optimizationcosttoservepathsummary table, we choose the customers table as the table to import. Again, the Project Sandbox is the Destination Data Connection, and the new table is simply called customers.
Instead of writing SQL queries ourselves to pivot the data in the cost to serve path summary table, we can use Leapfrog to create a new table where each customer has a row containing the customer name and the total cost for each scenario. See the Leapfrog section in the Getting Started with DataStar help center article and this quick start guide on using natural language to create DataStar tasks to learn more about using Leapfrog in DataStar effectively. For the Pivot Total Cost by Scenario by Customer task, the 2 Leapfrog prompts that were used to create the task are shown in the following screenshot:

The SQL Script reads:
DROP TABLE IF EXISTS total_cost_by_customer_combined;
CREATE TABLE total_cost_by_customer_combined AS
SELECT
pathdestination AS customer,
SUM(CASE WHEN scenarioname = 'Baseline' THEN pathcost ELSE 0 END)
AS total_cost_baseline,
SUM(CASE WHEN scenarioname = 'OpenPotentialFacilities' THEN pathcost ELSE 0 END)
AS total_cost_openpotentialfacilities
FROM c2s_path_summary
WHERE scenarioname IN ('Baseline', 'OpenPotentialFacilities')
GROUP BY pathdestination
ORDER BY pathdestination;
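The SUM(CASE WHEN …) construct implements a pivot: one output row per customer with a total-cost column per scenario. On a few hypothetical rows, the same logic looks like this in Python:

```python
from collections import defaultdict

# Hypothetical sample of c2s_path_summary rows: (scenarioname, pathdestination, pathcost).
rows = [
    ("Baseline", "CustA", 100.0),
    ("Baseline", "CustA", 25.0),
    ("OpenPotentialFacilities", "CustA", 80.0),
    ("Baseline", "CustB", 50.0),
    ("OpenPotentialFacilities", "CustB", 60.0),
]

pivot = defaultdict(lambda: [0.0, 0.0])  # [total_cost_baseline, total_cost_openpotentialfacilities]
for scenario, destination, cost in rows:
    if scenario == "Baseline":
        pivot[destination][0] += cost    # SUM(CASE WHEN scenarioname = 'Baseline' ...)
    elif scenario == "OpenPotentialFacilities":
        pivot[destination][1] += cost    # SUM(CASE WHEN ... 'OpenPotentialFacilities' ...)

# GROUP BY pathdestination, ORDER BY pathdestination
total_cost_by_customer = dict(sorted(pivot.items()))
print(total_cost_by_customer)
```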
To create the Calculate Cost Savings by Customer task, we gave Leapfrog the following prompt: “Use the total cost by customer table and add a column to calculate cost savings as the baseline cost minus the openpotentalfacilities cost”. The resulting SQL Script reads as follows:
ALTER TABLE total_cost_by_customer_combined
ADD COLUMN cost_savings DOUBLE PRECISION;
UPDATE total_cost_by_customer_combined
SET
cost_savings = total_cost_baseline - total_cost_openpotentialfacilities;
This task is also added to the macro; its name is "Calculate Cost Savings by Customer".
Lastly, we give Leapfrog the following prompt to join the table with cost savings (total_cost_by_customer_combined) and the customers table to add the coordinates from the customers table to the cost savings table: “Join the customers and total_cost_by_customer_combined tables on customer and add the latitude and longitude columns from the customers table to the total_cost_by_customer_combined table. Use an inner join and do not create a new table, add the columns to the existing total_cost_by_customer_combined table”. This is the resulting SQL Script, which was added to the macro as the "Add Coordinates to Cost Savings" task:
ALTER TABLE total_cost_by_customer_combined ADD COLUMN latitude VARCHAR;
ALTER TABLE total_cost_by_customer_combined ADD COLUMN longitude VARCHAR;
UPDATE total_cost_by_customer_combined SET latitude = c.latitude
FROM customers AS c
WHERE total_cost_by_customer_combined.customer = c.customername;
UPDATE total_cost_by_customer_combined SET longitude = c.longitude
FROM customers AS c
WHERE total_cost_by_customer_combined.customer = c.customername;
We can now run the macro, and once it is completed, we take a look at the tables present in the Project Sandbox:

We will use Microsoft Power BI to visualize the change in total cost between the 2 scenarios by customer on a map. To do so, we first need to set up a connection to the DataStar project sandbox from within Power BI. Please follow the steps in the “Connecting to Optilogic with Microsoft Power BI” help center article to create this connection. Here we will just show the step to get the connection information for the DataStar Project Sandbox, which underneath is a PostgreSQL database (next screenshot) and selecting the table(s) to use in Power BI on the Navigator screen (screenshot after this one):

After selecting the connection within Power BI and providing the credentials again, on the Navigator screen, choose to use just the total_cost_by_customer_combined table as this one has all the information needed for the visualization:

We will set up the visualization on a map using the total_cost_by_customer_combined table that we have just selected for use in Power BI using the following steps:
With the above configuration, the map will look as follows:

Green customers are those where the total cost went down in the OpenPotentialFacilities scenario, i.e. there are savings for this customer. The darker the green, the higher the savings. White customers did not see a lot of difference in their total costs between the 2 scenarios. The one that is hovered over, in Marysville in Washington state, has a small increase of $149.71 in total costs in the OpenPotentialFacilities scenario as compared to the Baseline scenario. Red customers are those where the total cost went up in the OpenPotentialFacilities scenario (i.e. the cost savings are a negative number); the darker the red, the higher the increase in total costs. As expected, the customers with the highest cost savings (darkest green) are those located in Texas and Florida, as they are now being served from DCs closer to them.
To give users an idea of what type of visualization and interactivity is possible within Power BI, we will briefly cover the 2 following screenshots. These are of a different Cosmic Frog model for which a cost-to-serve analysis was also performed. Two scenarios were run in this model: Baseline DC and Blue Sky DC. In the Baseline scenario, customers are assigned to their current DCs; in the Blue Sky scenario, they can be re-assigned to other DCs. The chart on the top left shows the cost savings by region (= US state) that are identified in the Blue Sky DC scenario. The other visualizations on the dashboard are all maps: the top right map shows the customers colored based on which DC serves them in the Baseline scenario, and the bottom 2 maps show the DCs used in the Baseline scenario (left) and in the Blue Sky scenario (right).

To drill into the differences between the 2 scenarios, users can expand the regions in the top left chart and select 1 or multiple individual customers. This is an interactive chart, and the 3 maps are then automatically filtered for the selected location(s). In the below screenshot, the user has expanded the NC region and then selected customer CZ_593_NC in the top left chart. In this chart, we see that the cost savings for this customer in the Blue Sky DC scenario as compared to the Baseline scenario amount to $309k. From the Customers map (top right) and Baseline DC map (bottom left) we see that this customer was served from the Chicago DC in the Baseline. We can tell from the Blue Sky DC map (bottom right) that this customer is re-assigned to be served from the Philadelphia DC in the Blue Sky DC scenario.

Since Leapfrog's creation, the system has continuously evolved with the addition of new specialized agents. The platform now features a comprehensive agent library built on a robust toolkit framework, enabling sophisticated multi-agent workflows and autonomous task execution.
AI agents are software systems that use a large language model (LLM) as a reasoning engine but go beyond chat by taking actions in an environment. Instead of only generating text, an agent can interpret a goal, decide what to do next, call external capabilities (tools), observe the results, and iterate until the objective is achieved.
In practice, an "agent" is not a single model call - it is a control system wrapped around an LLM:
This architecture matters because it turns the LLM from a passive text generator into an adaptive problem-solver that can:
An agent is not just a chat model. A chat model produces responses; an agent operates - it can run commands, fetch data, write artifacts, and iterate autonomously within defined constraints. Think of an AI agent as a smart assistant that can:
Agents are most useful when tasks are multi-step, partially specified, and feedback-driven, for example:
If a task is single-shot and fully specified (e.g., "summarize this paragraph"), a non-agent LLM call is often simpler and cheaper.
Most agents follow a ReAct-style loop (Reason + Act), sometimes with explicit planning:
A useful way to think about the loop is that each iteration should:
Well-behaved agents stop for explicit reasons, such as:
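The ReAct-style loop described above can be sketched in a few lines of Python. This is a minimal illustration, not Leapfrog's actual implementation; the decision-dictionary shape returned by `llm` is a hypothetical convention:

```python
def react_loop(llm, tools, goal, max_steps=10):
    """Minimal ReAct-style loop: reason, act, observe, repeat.

    `llm` is any callable that takes the conversation history and returns a
    dict such as {"thought": ..., "tool": ..., "args": ...} or {"final": ...}
    (a hypothetical shape chosen for this sketch).
    """
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):                 # stop reason: step budget exhausted
        decision = llm("\n".join(history))
        if decision.get("final") is not None:
            return decision["final"]           # stop reason: objective achieved
        tool = tools[decision["tool"]]         # act: call the chosen tool
        observation = tool(**decision.get("args", {}))
        history.append(f"Thought: {decision['thought']}")
        history.append(f"Observation: {observation}")  # observe, then iterate
    return None                                # gave up within defined constraints
```

Each pass through the loop records a thought and an observation, so the model's next decision is grounded in what the previous action actually returned.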
An agent is the intelligent layer that decides what to do. It's like a project manager who understands the goal, plans the approach, and uses available skills and tools to get the job done.

The Leapfrog ecosystem includes many specialized agents (some of which are shown in the image above), each designed for specific analytical and reporting tasks.
Why specialization helps:
A common pattern is an orchestrator (or "manager") agent that routes work to sub-agents and integrates their outputs into a final deliverable.
The agent toolkit is built on four foundational concepts that enable flexible and powerful agent development:
The core reasoning component - a large language model equipped with specialized skills and capabilities.
In addition to the model itself, an agent definition typically includes:
A versatile building block that packages how to do something. This modularity allows agents to be composed and extended dynamically.
A skill may:
A mechanism for injecting domain-specific expertise into agents at runtime, enabling them to operate effectively in specialized fields without requiring model retraining.
An intelligent storage system that helps agents overcome context-management challenges by preserving important information for future use, enabling continuity across interactions.
Current implementation supports several advanced capabilities enabled by the agent toolkit:
Agents can build structured plans that improve the accuracy and quality of final outputs through systematic decomposition of complex tasks.
The system supports custom tools provided by users, allowing agents to integrate with existing workflows and data infrastructure.
Persistent memory enables agents to maintain context and track important information across extended work sessions.
Complex tasks can be delegated to specialized sub-agents, allowing for efficient division of labor and expertise application.
The system intelligently manages context to ensure agents have access to relevant information while avoiding context window limitations.
Below is a simple workflow showing how different components work together. For simplicity, not all components are included here.

An agent is the intelligent layer that decides what to do. It's like a project manager who understands the goal, plans the approach, and uses available skills and tools to get the job done.
Skills are packaged capabilities that combine one or more tools with guidance on when and how to use them. Think of a skill as a trained procedure or technique.
Tools are the specific actions an AI agent can perform. They are specialized and do one specific thing reliably. They don't make decisions - they just execute when called.
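The agent/skill/tool hierarchy just described can be sketched as a small composition of types. This is an illustrative model of the concepts, not the actual toolkit's API; all class and field names here are assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Tool:
    # a tool does one specific thing reliably; it makes no decisions
    name: str
    run: Callable[..., str]

@dataclass
class Skill:
    # a skill packages one or more tools with guidance on when/how to use them
    name: str
    guidance: str
    tools: List[Tool]

@dataclass
class Agent:
    # the agent is the layer that decides which skill and tool to use
    name: str
    skills: Dict[str, Skill] = field(default_factory=dict)

    def use(self, skill_name: str, tool_name: str, **kwargs) -> str:
        skill = self.skills[skill_name]
        tool = next(t for t in skill.tools if t.name == tool_name)
        return tool.run(**kwargs)
```

The point of the separation is that tools stay simple and testable, skills carry the domain guidance, and only the agent holds decision-making logic.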
As an AI agent works, it produces logs that include the steps the agent takes, the tools it calls, and a work summary. The AI Response sections are typically the most useful: they explain the exploration plan, the work performed, and the results, and are generally written as a response to the user. The other sections mostly reflect internal processes.


The Model Output Insights Agent helps users investigate and analyze Cosmic Frog model outputs by turning analytical questions into structured, data-backed strategic reports. It breaks down complex questions into a step-by-step exploration plan, executes targeted queries, synthesizes findings, and produces a professional report - complete with visualizations and actionable recommendations.

This documentation describes how this specific agent works and can be configured, including walking through an example. Please see the “AI Agents: Architecture and Components” Help Center article if you are interested in understanding how the Optilogic AI Agents work at a detailed level.
Extracting meaningful insights from large databases typically requires exploring and analyzing many output tables which can take a lot of time and effort. The Model Output Insights Agent streamlines the process, helping users get to the insights quicker than ever before.
Main skills the Model Output Insights Agent uses:

Supporting capabilities:

The agent can be accessed through the Run Utility task in DataStar. The key inputs are:

Optionally, users can configure the following Run Utility task inputs:

After the run, a report in markdown format (.md) and possible charts are created and can be found in the Explorer with the specified file name and folder. Once clicked, the file is opened in the Lightning Editor application for review.
Note that currently the charts are only included in the markdown file as a file name. Users can look for the charts in the Charts folder in the targeted report directory:
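Since the charts are referenced by bare file name, a small post-processing script could rewrite them into markdown image links before sharing the report. This is a speculative convenience sketch; the file-name pattern and the Charts folder layout are assumptions:

```python
import re
from pathlib import Path

def embed_charts(report_path: str, charts_dir: str = "Charts") -> str:
    """Rewrite bare chart file names (e.g. 'flow_map.png') in a markdown
    report into image links pointing at the Charts folder.

    Assumes chart names appear as plain text and live under `charts_dir`.
    """
    text = Path(report_path).read_text()
    # turn any bare .png/.svg file name into a markdown image link,
    # skipping names already inside a link or path
    return re.sub(
        r"(?<![(\[/])\b([\w\-]+\.(?:png|svg))\b",
        rf"![\1]({charts_dir}/\1)",
        text,
    )
```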

The Run Utility task also offers the ability for users to set Run Configuration options. This is optional.



This example uses the Global Supply Chain Strategy model from the Resource Library to get insights on a comparison of the Baseline and No Detroit DC scenarios, in which cost, flow shifts, and service impacts are explored.


Cosmic Frog Model Name: Global Supply Chain Strategy
Analysis Question: Compare cost and flow from Baseline and No Detroit DC scenarios. I'm interested in knowing the cost bucket that drives total savings. I want to know where the flow from Detroit DC was redirected to. Lastly, compare weighted average service distance - i.e. do customers have shorter/longer/the same service distance when Detroit closes down. Who are the top 5 customers with highest service impact?
Knowledge: Info on target audience for the report, expected report length and tone:

Should you wish to read the entire report instructions file and/or use it as a starting point for your own usage with this Agent, you can download it here. After downloading, please rename the .txt extension to .md. You can then upload it to your Optilogic account using the Explorer application and then view it in the Lightning Editor application.
Outputs: The report as a markdown file and a chart in the Charts folder:

The Data Cleansing Agent is an AI-powered assistant that helps users profile, clean, and standardize their database data without writing code. Users describe what they want in plain English -- such as "find and fix postal code issues in the customers table" or "standardize date formats in the orders table to ISO" -- and the agent autonomously discovers issues, creates safe working copies of the data, applies the appropriate fixes, and verifies the results. It handles common supply chain data problems including mixed date formats, inconsistent country codes, Excel-corrupted postal codes, missing values, outliers, and messy text fields. It expects a connected database with one or more tables as input. The output is a set of cleaned copies of their tables in the database which users can immediately use for Cosmic Frog model building, reporting, or further analysis, while the original data is preserved untouched for comparison or rollback.
This documentation describes how this specific agent works and can be configured, including walking through multiple examples. Please see the “AI Agents: Architecture and Components” Help Center article if you are interested in understanding how the Optilogic AI Agents work at a detailed level.
Cleaning and standardizing data for supply chain modeling typically requires significant manual effort -- writing SQL queries, inspecting column values, fixing formatting issues one at a time, and verifying results. The Data Cleansing Agent streamlines this process by turning a single natural language prompt into a full profiling, cleaning, and verification workflow.
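As an illustration of one of the problems the agent handles, Excel-corrupted postal codes typically show up as stripped leading zeros, scientific notation, or float-coerced values. The sketch below shows the kind of repair involved; it is an assumption-laden illustration, not the agent's actual implementation:

```python
def fix_us_postal_code(value) -> str:
    """Repair common Excel corruptions of 5-digit US ZIP codes (sketch only):
    - scientific notation from numeric cells, e.g. '1.2345E4' -> '12345'
    - trailing '.0' from float coercion,       e.g. '30301.0' -> '30301'
    - stripped leading zeros,                  e.g. '2138'    -> '02138'
    Non-numeric placeholders are returned unchanged.
    """
    s = str(value).strip()
    if "e" in s.lower() and s.lower() != "e":
        try:
            s = str(int(float(s)))   # collapse scientific notation
        except ValueError:
            return s
    elif s.endswith(".0"):
        s = s[:-2]                   # drop float-coercion suffix
    return s.zfill(5) if s.isdigit() else s  # restore leading zeros
```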
Key Capabilities:
Skills:

The agent can be accessed through the Run Utility task in DataStar, see also the screenshots below. The key inputs are:
The Task Description field includes placeholder examples to help you get started:
Optionally, users can:



Suggested workflow:
After the run, the agent produces a structured summary of everything it did, including metrics on rows affected, issues found, and issues fixed; see the next section where this Job Log is described in more detail. The cleaned data is persisted as clean_* tables in the database (e.g., clean_customers, clean_shipments).
After a run completes, the Job Log in Run Manager provides a detailed trace of every step the agent took. Understanding the log structure helps users verify what happened and troubleshoot if needed. The log follows a consistent structure from start to finish.

Header
Every log begins with a banner showing the database name and the exact prompt that was submitted.

Connection & Setup
The agent validates the database connection and initializes itself with its full set of tools. If Verbose Output is set to "Detailed", the log also prints the system prompt and tool list at this stage.

Planning Phase
For non-trivial tasks, the agent creates a strategic execution plan before taking action. This appears as a PlanningSkill tool call, followed by an AI Response box containing a structured plan with numbered steps, an objective, approach, and skill mapping. The plan gives users visibility into the agent's intended approach before it begins working.

Tool Calls and Thinking
The bulk of the log shows the agent calling its specialized tools one at a time. Each tool call appears in a bordered box showing the tool name. Between tool calls, the agent's reasoning is shown in Thinking boxes -- explaining what it learned from the previous tool, what it plans to do next, and why. These thinking sections are among the most useful parts of the log for understanding the agent's decision-making.

The agent may call many tools in sequence depending on the complexity of the task. Profiling-only prompts typically involve discovery tools (schema, missing data, date issues, location issues, outliers). Cleanup prompts add transformation tools (ensure_clean_table, standardize_country_codes, standardize_date_column, etc.).
Occasionally a Memory Action Applied entry appears between steps -- this is the agent recording context for its own use and can be ignored.
Error Recovery
If the agent encounters a validation error on a tool call (e.g., a column stored as TEXT when a numeric type was expected, or a missing parameter), the log shows the error and the agent's automatic adjustment. The agent reasons about the failure in a Thinking block and retries with corrected parameters. Users do not need to intervene.
Agent Response
At the end of the run, the agent produces a structured summary of everything it discovered or changed. This is the most important section of the log for understanding outcomes:

For profiling prompts, this section reports what was found across all tables -- schema details, missing data percentages, date format inconsistencies, location quality issues, numeric anomalies, and recommendations for next steps. For cleanup prompts, it reports which tables were modified, what transformations were applied, how many rows were affected, and confirmation that originals are preserved.
Execution Summary
The log ends with runtime statistics and the full list of skills that were available to the agent:


What the agent expects in your database:
The agent works with any tables in the selected database. There are no fixed column name requirements -- the agent discovers the schema automatically. However, for best results:

Tips & Notes
A user wants to understand what data is in their database before deciding what to clean.
Database: Supply Chain Dataset
Task Description: List all tables in the database and show their schemas
What happens: The agent calls get_database_schema for all tables and exits with a structured report.
Output:
Requested: List all tables and show schemas.
Discovered (schema 'starburst'):
...
Total: 12 tables, 405 rows, 112 columns
A user needs to clean up customer location data before using it in a Cosmic Frog network optimization model.
Database: Supply Chain Dataset
Task Description: Clean the customers table completely: standardize dates to ISO, fix postal codes (Excel corruption + placeholders), standardize country codes to alpha-2, clean city names, and normalize emails to lowercase
What the agent does:
Output:
Completed data cleansing of clean_customers table:
All changes applied to clean_customers (original customers table preserved).
The cleaned data is available in the clean_customers table in the database. The original customers table remains untouched.
A user with a 14-table enterprise supply chain database needs to clean and standardize all data before building Cosmic Frog models for network optimization and simulation.
Database: Enterprise Supply Chain
Task Description: Perform a complete data cleanup across all tables: standardize all dates to ISO, standardize all country codes to alpha-2, clean all city names, fix all postal codes, and normalize all email addresses to lowercase. Work systematically through each table.
What the agent does: The agent works systematically through all tables -- standardizing dates across 12+ tables, fixing country codes, cleaning city names, repairing postal codes, normalizing emails and status fields, detecting and handling negative values, converting mixed units to metric, validating calculated fields like order totals, and reporting any remaining referential integrity issues. This is the most comprehensive operation the agent can perform.
Output: A detailed summary covering every table touched, every transformation applied, and a final quality scorecard showing the before/after improvement.
Below are example prompts users can try, organized by category.
The Full Truckload Costing utility solves the common problem of missing transportation cost data when building supply chain models. Rather than requiring users to manually research rates for every lane, this workflow automatically derives costs from a company's existing shipment history. The utility expects two input tables: a lanes-to-cost table containing the origin-destination pairs that need pricing, and an optional historical shipments table containing preprocessed cost data. After running the utility, users receive a fully costed lanes table with confidence levels for each estimate.
Sample Data
System Utility
The steps to use this utility are as follows. These are illustrated with screenshots below.
Screenshots of the steps:









Key Constraints:

Key Constraints:
The utility produces an output table containing all lanes from the input with the following additional columns populated:

The utility processes lanes through a sequential pipeline, with each step only processing lanes that still have NULL costs:
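The pipeline behavior described above can be sketched as follows. The two costing steps shown are illustrative stand-ins (an exact match against history, then a distance-based fallback), not the utility's actual step list, and the rates are made up:

```python
def cost_lanes(lanes, steps):
    """Run lanes through a sequential costing pipeline.

    Each step is a function lane -> cost (or None if it cannot price the lane);
    every step only processes lanes whose cost is still None.
    """
    for step in steps:
        for lane in lanes:
            if lane["cost"] is None:
                lane["cost"] = step(lane)
    return lanes

# illustrative steps (hypothetical rates, not the utility's real logic)
def exact_history_match(lane):
    history = {("CHI", "DAL"): 1450.0}  # hypothetical historical lane rate
    return history.get((lane["origin"], lane["destination"]))

def distance_fallback(lane):
    return round(lane["distance_mi"] * 2.10, 2)  # assumed $/mile rate
```

A lane priced by an earlier step is never re-priced by a later one, which is what makes the pipeline order meaningful.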
The Less Than Truckload Costing utility solves the challenge of pricing less-than-truckload shipments when carrier rate data is complex and varies by service level, distance, and weight. Rather than manually looking up rates in carrier tariff tables, this workflow automates the entire process using FedEx Express Freight standard list rates. The utility expects a lanes-to-cost table containing shipment details including origin, destination, distance, weight, and desired service level. After running the utility, users receive a fully costed table with calculated transportation costs.
Sample Data
System Utility
The steps to use this utility are as follows. These are illustrated with screenshots below.
Screenshots of the steps:







Key Constraints:
The utility produces an output table containing all lanes from the input with the following columns populated:

Zones are determined automatically based on the following priority:
Special Zones (for Alaska/Hawaii):
Standard Distance-Based Zones:

Costs are calculated using the following formula:
base_charge = shipment_weight x price_per_lb
final_cost = MAX(base_charge, minimum_charge)
Effective Weight: If the shipment weight is below the minimum weight for a service/zone combination, the utility uses the minimum weight band's rate but calculates the charge based on the actual shipment weight.
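The cost formula and the effective-weight rule can be sketched as a small Python function. The rate-band structure and numbers below are illustrative assumptions, not FedEx list rates:

```python
def ltl_cost(weight, rate_bands, minimum_charge):
    """Compute an LTL charge from weight-banded rates (illustrative sketch).

    rate_bands: list of (band_min_weight, price_per_lb) in ascending order.
    Below the lowest band's minimum weight, the lowest band's rate applies
    but the charge is still based on the actual shipment weight.
    The result is never less than the minimum charge.
    """
    rate = rate_bands[0][1]            # effective-weight rule: default to lowest band's rate
    for band_min, band_rate in rate_bands:
        if weight >= band_min:
            rate = band_rate           # take the highest band the weight qualifies for
    base_charge = weight * rate
    return max(base_charge, minimum_charge)
```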

Named Filters are an exciting new feature that allows users to create and save specific filters directly on grid views, which can then be used seamlessly across all policy tables, scenario items, and map layers. For example, if you create a filter named “DCs” in the Facilities table to capture all entries with “DC” in their designation, this Named Filter can then be applied in a policy table, providing a dynamic alternative to the traditional Group function.
Unlike Groups, Named Filters update automatically: adding or removing a DC record in the Facilities table is instantly reflected in the Named Filter, streamlining the workflow and eliminating the need for manual updates. Additionally, when creating Scenario Items or defining Map Layers, users can select Named Filters to represent specific conditions and preview the data, making the process much quicker and simpler.
In this help article, we will first cover how Named Filters are created. The sections after that discuss how Named Filters can be used on input tables, in scenario items, and on map layers, while the final section contains a few notes on deleting Named Filters.
Named Filters can be set up and saved on any Cosmic Frog table: input tables, output tables, and custom tables. These tables are found in the Data module of a Cosmic Frog model:


A quick description of each of the options available in the Filter drop-down menu follows here; we will cover most of these in more detail in the remainder of this Help Article:
Note that an additional Save Filter option becomes available in this menu once a filter has been created (added) and changes have subsequently been made to the table's filter conditions. The Save Filter option can then be used to update the existing named filter to reflect these changes.
Let’s walk through setting up a filter on the Facilities table that filters out records where the Facility Name ends in “DC” and save it as a named filter called “DCs”:



There are 3 buttons below the list of filters as follows (these were obscured by the hover text in the previous screenshot):

There is a right-click context menu available for filters listed in the Named Filters pane, which allows the user to perform some of the same actions as those in the main Filter menu shown above:

Named Filters can use filtering conditions that are applied to multiple fields in a table. The next example shows a Named Filter called “CZ2* Space Suit demand >6k” on the Customer Demand input table which uses filtering conditions on three fields:

Conditions were applied to 3 fields in the Customer Demand table, as follows: 1) Customer Name Begins With “CZ2”, 2) Product Name Contains “Space”, and 3) Quantity Greater Than “6000”. The resulting filter was saved as a Named Filter with the name “CZ2* Space Suit demand >6k” which is applied in the screenshot above. When hovering over this Named Filter, we indeed see the 3 fields and that they each have a single condition on them.
Besides being able to create Named Filters on input tables, they can also be created on output and custom tables. On output tables, for example, this can expedite the review of results after running additional scenarios: once the runs are done, one can apply pre-saved Named Filters one after the other instead of re-typing each filter of interest every time. This example shows a Named Filter on the Optimization Facility Summary output table to show records where the Throughput Utilization is greater than 0.8:

Next, we will see how multiple Named Filters can be applied to a table. In the example we will use, there are 4 Named Filters set up on the Facilities table:

Next, we will apply another Named Filter in addition to this first one ("USA locations"). How the Named Filters work together depends on whether they filter on the same field or on different fields:
Now, if we want to filter out only Ports located in the USA, we can apply 2 of the Named Filters simultaneously:

To show an example of how multiple named filters that filter on the same field work, we will add a third Named Filter:

To alter an existing filter, we change its criteria and then save the resulting filter, replacing the original Named Filter. Let’s illustrate this through an example: in a model with about 1.3k customers in the US, we have created a Named Filter “New York and New Jersey”, but later on realize that this filter also includes customers in New Hampshire and New Mexico:

In reality, this filter also filters out customers located in the regions (states) of New Hampshire and New Mexico in addition to those in New York and New Jersey. So, the next step is to update the filter to only filter out the New York and New Jersey customers:

Next, we can use the Save Filter option from either the Filter menu drop-down list or the context menu after right-clicking on the filter in the Named Filters pane to update the existing "New York and New Jersey" named filter to use the updated condition. The following screenshot shows the latter method:

After choosing Save Filter from the context menu, the following message is shown for the user to confirm they want to overwrite the original named filter using the current filter conditions:

After clicking Save, the existing named filter has been updated:


The last option of Show Input Data Errors in the Filter menu creates a special filter named ERRORS and filters out records in the input table it is used on that have errors in the input data. This can be very helpful as records with input errors may have these in different fields and the types of errors may be different, so a user is not able to create 1 single filter that would capture multiple different types of errors. When this filter is applied, any record that has 1 or multiple fields with a red outline will be filtered out and shown. Hovering over the field gives a short description of the problem with the value in the field.

Named Filters for certain model elements (i.e. Customers, Facilities, Suppliers, Products, Periods, Modes, Shipments, Transportation Assets, Processes, Bills Of Materials, Work Centers, and Work Resources) can be used in other input tables, very similar to how Groups work in Cosmic Frog: instead of setting up multiple records for individual elements, for example a transportation policy from A to B for each finished good, a Named Filter that filters out all finished goods on the Products table can be used to set up 1 transportation policy for these finished goods from A to B (which at run-time will be expanded into a policy for each finished good).

The advantage of using Named Filters instead of Groups is that Named Filters are dynamic. If records are added to tables containing model elements and they match the conditions of any Named Filters, they are automatically added to those Named Filters. Think for example of Products with the prefix FG_ to indicate they are finished goods and a Named Filter “Finished Goods” that filters the Product Name on Begins With “FG_”. If a new product is added where the Product Name starts with FG_, it is automatically added to the Finished Goods Named Filter, and anywhere this filter is used this new finished good is now included too. We will look at 2 examples in the next few screenshots.

The completed transportation policy record uses Named Filters for the Origin Name, Destination Name, and Product Name, making this record flexible as long as the naming conventions of the factories, ports, and raw materials keep following the same rules.

The next example is on one of the Constraints tables, Production Constraints. On the Constraints tables, the Group Behavior fields dictate how an element name that is a Group or a Named Filter should be used. When set to Enumerate, the constraint is applied to each individual member of the group or named filter. If it is set to Aggregate, the constraint applies to all members of the group or named filter together. This Production Constraint states that at each factory a maximum amount of 150,000 units over all finished goods together can be produced:

Previously, when setting up a scenario item, the records that the scenario item's change needed to be applied to could only be set using the Condition Builder. Now users have the added option to use a saved Named Filter instead, which is easier as the user does not need to know the syntax for building a condition, and more flexible as Named Filters are dynamic, as discussed in the previous section. In addition, users can preview the records that the change will be made to, reducing the chance of mistakes.
Please note that a maximum of 1 Named Filter can be used on a scenario item.
The following example changes the Suppliers table of a model which has around 70 suppliers, about half of these are in Europe and the other half in China:


The Named Filters drop-down collapses after choosing the China Suppliers Named Filter as the condition, and now we see the Preview of the filtered grid. This is the Suppliers table with the Named Filter China Suppliers applied. At the right top of the grid the name of the applied Named Filter(s) is shown, and we can see that in the preview we indeed only see Suppliers for which the Country is China. So these are the records the change (setting Status = Exclude) will be made to in scenarios that use this scenario item.
A few notes on the Filter Grid Preview:
Besides using Named Filters on other input tables and for setting up conditions for scenario items, they can also be used as conditions for Map Layers, which will be covered in this final section of this Help Article. Like for scenario items, there is also a Filter Grid Preview for Map Layers to double-check which records will be filtered out when applying the condition(s) of 1 or multiple Named Filters.
In this first example, a Named Filter on the Facilities table filters out only the Facilities that are located in the USA:


Another example of the same model is to use a different Named Filter from the Facilities table to show only Factories on the map:

If the Factories and the Ports Named Filters had both been enabled, then all factories and ports would be showing on the map. So, like for scenario items, applying multiple Named Filters to a Map Layer is additive (acts like OR statements).
The same notes that were listed for the Filter Grid Preview for scenario items apply to the Filter Grid Preview for Map Layers too: columns with conditions have the filter icon on them, users can resize and (multi-)sort the columns, however, re-ordering the columns is not possible.
Named Filters can be deleted, which affects other input tables, scenario items, and map layers that used the now-deleted Named Filter(s). This is explained further in this final section of the Help Article on Named Filters.
A Named Filter can be deleted by using one of three methods:
After choosing to delete a named filter, the following message comes up to ask the user for confirmation. In this example we are deleting the filter named "New York and New Jersey" which is a filter on the Customers input table:

The message will let the user know if the named filter that is about to be deleted was used in any Map Layers and/or Scenario Items. If so, it lists the names of these layers/items in the "See where used" section which can be expanded and collapsed by clicking on the caret symbol. Note that currently this message does not indicate if the named filter is used in any input tables.
The results of deleting a Named Filter that was used are as follows:
Optilogic has developed Python libraries to facilitate scripting for 2 of its flagship applications: Cosmic Frog, the most powerful supply chain design tool on the market, and DataStar, its recently released AI-powered data product where users can create flexible, accessible, and repeatable workflows with zero learning curve.
Instead of going into the applications themselves to build and run supply chain models and data workflows, these libraries enable users to programmatically access their functionality and underlying data. Example use cases for such scripts are:
In this documentation we cover the basics of getting yourself set up so you can take advantage of these Python scripting libraries, both on a local computer and on the Optilogic platform leveraging the Lightning Editor application. More specific details for the cosmicfrog and datastar libraries, including examples and end-to-end scripts, are detailed in the following Help Center articles and library specifications:
Working locally with Python scripts has the advantage that you can use code completion features, which may include text auto-completion, showing what arguments functions need, catching incorrect syntax/names, etc. One example setup that achieves this is one where Python, Visual Studio Code, and an IntelliSense extension package for Python for Visual Studio Code are installed locally:
Once you are set up locally and start working with Python scripts in Visual Studio Code, you will need to install the Python libraries you want to use to have access to their functionality. You do this by typing the following in a terminal in Visual Studio Code (if no terminal is open yet: click on the View menu at the top and select Terminal, or use the keyboard shortcut Ctrl + `):

When installing these libraries, multiple external libraries (dependencies) are installed too. These are required to run the packages successfully and/or make working with them easier. These include the optilogic, pandas, and SQLAlchemy packages (among others) for both libraries. You can find out which packages are installed with the cosmicfrog / ol-datastar libraries by typing “pip show cosmicfrog” or “pip show ol-datastar" in a terminal.
To use other Python libraries in addition, you will usually need to install them using “pip install” too before you can leverage them.
If you want to access certain items on the Optilogic platform (like Cosmic Frog models, DataStar project sandboxes) while working locally, you will need to whitelist your IP address on the platform, so the connections are not blocked by a firewall. You can do this yourself on the Optilogic platform:

Please note that for working with DataStar, the whitelisting of your IP address is only necessary if you want to access the Project Sandbox of projects directly through scripts. You do not need to whitelist your IP address to leverage other functions while scripting, like creating projects, adding macros and their tasks, and running macros.
App Keys are used to authenticate the user from the local environment on the Optilogic platform. To create an App Key, see this Help Center Article on Generating App and API Keys. Copy the generated App Key and paste it into an empty Notepad window. Save this file as app.key and place it in the same folder as your local Python script.
It is important to emphasize that App Keys and app.key files should not be shared with others, e.g. remove them from folders / zip-files before sharing. Individual users need to authenticate with their own App Key.
The next set of screenshots shows an example Python script named testing123.py in our local setup. It uses the cosmicfrog library; using the ol-datastar library works similarly. The first screenshot shows a list of functions available from the cosmicfrog Python library:

When you continue typing after “model.”, the code completion feature shows a list of functions you may be looking for. In the next screenshot these are the ones that start with or contain a “g”, as only a “g” has been typed so far. This list auto-updates the more you type. You can select from the list with your cursor or the up/down arrow keys and press the Tab key to select and auto-complete:

When you have completed typing the function name and next type a parenthesis ‘(‘ to start entering arguments, a pop-up will come up which contains information about the function and its arguments:

As you type the arguments for the function, the argument that you are on and its expected type (e.g. bool for a Boolean, str for a string, etc.) are shown in blue font, and a description of this specific argument appears above the function description (e.g. above box 1 in the above screenshot). In the screenshot above we are on the first argument, input_only, which requires a Boolean and defaults to False if the argument is not specified. In the screenshot below we are on the fourth argument, original_names, which is now in blue font; its default is also False, and the argument description above the function description has changed to reflect this fourth argument:

Once you are ready to run a script, you can click on the play button at the top right of the screen:

As mentioned above, you can also use the Lightning Editor application on the Optilogic platform to create and run Python scripts. Lightning Editor is an Integrated Development Environment (IDE) which has some code completion features, but these are not as extensive and complete as those in Visual Studio Code when used with an IntelliSense extension package.
When working on the Optilogic platform, you are already authenticated as a user, and you do not need to generate / provide an App Key or app.key file nor whitelist your IP address.
When using the datastar library in scripts, users need to place a requirements.txt file in the same folder on the Optilogic platform as the script. This file should only contain the text “ol-datastar” (without the quotes). No requirements.txt file is required when using the cosmicfrog library.
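For reference, the complete contents of that requirements.txt file are just this single line:

```text
ol-datastar
```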
The following simple test.py Python script on Lightning Editor will print the first Hopper output table name and its column names:



DataStar users can take advantage of the datastar Python library, which gives users access to DataStar projects, macros, tasks, and connections through Python scripts. This way users can build, access, and run their DataStar workflows programmatically. The library can be used in a user’s own Python environment (local or on the Optilogic platform), and it can also be used in Run Python tasks in a DataStar macro.
In this documentation we will cover how to use the library through multiple examples. At the end, we will step through an end-to-end script that creates a new project, adds a macro to the project, and creates multiple tasks that are added to the macro. The script then runs the macro while giving regular updates on its progress.
Before diving into the details of this article, it is recommended to read this “Setup: Python Scripts for Cosmic Frog and DataStar” article first; it explains what users need to do in terms of setup before they can run Python scripts using the datastar library. To learn more about the DataStar application itself, please see these articles on Optilogic’s Help Center.
Succinct documentation in PDF format of all datastar library functionality can be downloaded here (please note that the long character string at the beginning of the filename is expected). This includes a list of all available properties and methods for the Project, Macro, Task, and Connection classes at the end of the document.
All Python code that is shown in the screenshots throughout this documentation is available in the Appendix, so that you can copy-paste from there if you want to run the exact same code in your own Python environment and/or use these as jumping off points for your own scripts.
If you have reviewed the “Setup: Python Scripts for Cosmic Frog and DataStar” article and are set up with your local or online Python environment, we are ready to dive in! First, we will see how we can interrogate existing projects and macros using Python and the datastar library. We want to find out which DataStar projects are already present in the user’s Optilogic account.


Once the parentheses are typed, hover text comes up with information about this function. It tells us that the outcome of this method will be a list of strings, and the description of the method reads “Retrieve all project names visible to the authenticated user”. Most methods will have similar hover text describing the method, the arguments it takes and their default values, and the output format.
Now that we have a variable that contains the list of DataStar projects in the user account, we want to view the value of this variable:

Next, we want to dig one level deeper and for the “Import Historical Shipments” project find out what macros it contains:

Finally, we will retrieve the tasks this “Import Shipments” macro contains in a similar fashion:

In addition, we can have a quick look in the DataStar application to see that the information we are getting from the small scripts above matches what we have in our account in terms of projects (first screenshot below), and the “Import Shipments” macro plus its tasks in the “Import Historical Shipments” project (second screenshot below):


Besides getting information about projects and macros, other useful methods for projects and macros include:
Note that when creating new objects (projects, macros, tasks or connections) these are automatically saved. If existing objects are modified, their changes need to be committed by using the save method.
Macros can be copied, either within the same project or into a different project. Tasks can also be copied, either within the same macro, between macros in the same project, or between macros of different projects. If a task is copied within the same macro, its name will automatically be suffixed with (Copy).
As an example, we will consider a macro called “Cost Data” in a project named “Data Cleansing and Aggregation NA Model”, which is configured as follows:

The North America team shows this macro to their EMEA counterparts who realize that they could use part of this for their purposes, as their transportation cost data has the same format as that of the NA team. Instead of manually creating a new macro with new tasks that duplicate the 3 transportation cost related ones, they decide to use a script where first the whole macro is copied to a new project, and then the 4 tasks which are not relevant for the EMEA team are deleted:

After running the script, we see in DataStar that there is indeed a new project named “Data Cleansing and Aggregation EMEA” which has a “Cost Data EMEA” macro that contains the 3 transportation cost related tasks that we wanted to keep:

Note that another way we could have achieved this would have been to copy the 3 tasks from the macro in the NA project to the new macro in the EMEA project. The next example shows this for one task. Say that after the Cost Data EMEA macro was created, the team finds they also have a use for the “Import General Ledger” task that was deleted as it was not on the list of “tasks to keep”. In an extension of the previous script or a new one, we can leverage the add_task method of the Macro class to copy the “Import General Ledger” task from the NA project to the EMEA one:

After running the script, we see that the “Import General Ledger” task is now part of the “Cost Data EMEA” macro and is connected to the Start task:

Several additional helpful features on chaining tasks together in a macro are:
DataStar connections allow users to connect to different types of data sources, including CSV-files, Excel files, Cosmic Frog models, and Postgres databases. These data sources need to be present on the Optilogic platform (i.e. visible in the Explorer application). They can then be used as sources / destinations / targets for tasks within DataStar.
We can use scripts to create data connections:

After running this script, we see the connections have been created. In the following screenshot, the Explorer is on the left, and it shows the Cosmic Frog model “Global Supply Chain Strategy.frog” and the Shipments.csv file. The connections using these are listed in the Data Connections tab of DataStar. Since we did not specify any description, an auto-generated description “Created by the Optilogic Datastar library” was added to each of these 2 connections:

In addition to the connections shown above, data connections to Excel files (.xls and .xlsx) and PostgreSQL databases stored on the Optilogic platform can be created too. Use the ExcelConnection and OptiConnection classes to set up these types of connections.
Each DataStar project has its own internal data connection, the project sandbox. This is where users perform most of the data transformations after importing data into the sandbox. Using scripts, we can access and modify data in this sandbox directly, instead of using tasks in macros to do so. Note that if you have a repeatable data workflow in DataStar that is used periodically to refresh a Cosmic Frog model (you update your data sources and re-run your macros), you need to be mindful of making one-off changes to the project sandbox through a script. When you change data in the sandbox through a script, macros and tasks are not updated to reflect these modifications, so the next run of the workflow may produce different results if the script’s change is not made again. If you want to include such changes in your workflow, you can add a Run Python task to your macro within DataStar.
Our “Import Historical Shipments” project has a table named customers in its project sandbox:

To make the customers sort in numerical order of their customer number, the goal of the next script is to pad the number part of the customer names with leading zeros so all numbers consist of 4 digits. While we are at it, we will also replace the “CZ” prefix with a “Cust_” prefix.
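Before running this against the sandbox, the renaming logic can be tried on a small pandas Series (pandas is installed as a dependency of the datastar library); the sample values below are illustrative. Note that `lstrip("CZ")` strips any of the characters ‘C’ and ‘Z’ from the left rather than the literal prefix, which is safe here because the remainder of each name is purely numeric:

```python
import pandas as pd

# Sample customername values in the same CZ<number> format as the sandbox table
names = pd.Series(["CZ1", "CZ20", "CZ300", "CZ4000"])

# Strip the leading "CZ" characters
numbers = names.map(lambda x: x.lstrip("CZ"))

# Left-pad with zeros to 4 digits and add the new prefix
new_names = "Cust_" + numbers.str.zfill(4)
print(new_names.tolist())  # ['Cust_0001', 'Cust_0020', 'Cust_0300', 'Cust_4000']
```

Because every number now has 4 digits, an alphabetical sort of the new names matches the numerical order of the customer numbers.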
First, we will show how to access data in the project sandbox:

Next, we will use functionality of the pandas Python library (installed as a dependency when installing the datastar library) to transform the customer names to our desired Cust_xxxx format:

As a last step, we can now write the updated customer names back into the customers table in the sandbox. Or, if we want to preserve the data in the sandbox, we can also write to a new table as is done in the next screenshot:

We use the write_table method to write the dataframe with the updated customer names into a new table called “new_customers” in the project sandbox. After running the script, opening this new table in DataStar shows us that the updates worked:

Finally, we will put everything we have covered above together in one script which will:
We will look at this script through the next set of screenshots. For those who would like to run this script themselves, and possibly use it as a starting point to modify into their own script:


Next, we will create 7 tasks to add to the “Populate 3 CF Model Tables” macro, starting with an Import task:

Similar to the “create_dc_task” Run SQL task, 2 more Run SQL tasks are created to create unique customers and aggregated customer demand from the raw_shipments table:

Now that we have generated the distribution_centers, customers, and customer_demand tables in the project sandbox using the 3 Run SQL tasks, we want to export these tables into their corresponding Cosmic Frog tables (facilities, customers, and customerdemand) in the empty Cosmic Frog model:

The following 2 Export tasks are created in a very similar way:


This completes the build of the macro and its tasks.
If we run it like this, the tasks will be chained in the correct way, but they will be displayed on top of each other on the Macro Canvas in DataStar. To arrange them nicely and prevent having to reposition them manually in the DataStar UI, we can use the “x” and “y” properties of tasks. Note that since we are now changing existing objects, we need to use the save method to commit the changes:

In the green outlined box, we see that the x-coordinate on the Macro Canvas for the import_shipments_task is set to 250 (line 147) and its y-coordinate to 150 (line 148). In line 149 we use the save method to persist these values.
Now we can kick off the macro run and monitor its progress:

While the macro is running, messages written to the terminal by the wait_for_done method will look similar to the following:

We see 4 messages where the status was “processing” and then a final fifth one stating the macro run has completed. Other statuses one might see are “pending” when the macro has not yet started and “errored” in case the macro could not finish successfully.
Opening the DataStar application, we can check the project and CSV connection were created on the DataStar startpage. They are indeed there, and we can open the “Scripting with DataStar” project to check the “Populate 3 CF Model Tables” macro and the results of its run:

The macro contains the 7 tasks we expect, and checking their configurations shows they are set up the way we intended.
Next, we have a look at the Data Connections tab to see the results of running the macro:

Here follows the code for each of the above examples. You can copy-paste it into your own scripts and modify it to your needs. Note that wherever names and paths are used, you may need to update these to match your own environment.
Get list of DataStar projects in user's Optilogic account and print list to terminal:
from datastar import *
project_list = Project.get_projects()
print(project_list)
Connect to the project named "Import Historical Shipments" and get the list of macros within this project. Print this list to the terminal:
from datastar import *
project = Project.connect_to("Import Historical Shipments")
macro_list = project.get_macros()
print(macro_list)
In the same "Import Historical Shipments" project, get the macro named "Import Shipments", and get the list of tasks within this macro. Print the list with task names to the terminal:
from datastar import *
project = Project.connect_to("Import Historical Shipments")
macro = project.get_macro("Import Shipments")
task_list = macro.get_tasks()
print(task_list)
Copy 3 of the 7 tasks in the "Cost Data" macro in the "Data Cleansing and Aggregation NA Model" project to a new macro "Cost Data EMEA" in a new project "Data Cleansing and Aggregation EMEA". Do this by first copying the whole macro and then removing the tasks that are not required in this new macro:
from datastar import *
# connect to project and get macro to be copied into new project
project = Project.connect_to("Data Cleansing and Aggregation NA Model")
macro = project.get_macro("Cost Data")
# create new project and clone macro into it
new_project = Project.create("Data Cleansing and Aggregation EMEA")
new_macro = macro.clone(new_project, name="Cost Data EMEA",
                        description="Cloned from NA project; "
                                    "keep 3 transportation tasks")
# list the transportation cost related tasks to be kept and get a list
# of tasks present in the copied macro in the new project, so that we
# can determine which tasks to delete
tasks_to_keep = ["Start",
                 "Import Transportation Cost Data",
                 "Cleanse TP Costs",
                 "Aggregate TP Costs by Month"]
tasks_present = new_macro.get_tasks()
# go through tasks present in the new macro and
# delete if the task name is not in the "to keep" list
for task in tasks_present:
    if task not in tasks_to_keep:
        new_macro.delete_task(task)
Copy specific task "Import General Ledger" from the "Cost Data" macro in the "Data Cleansing and Aggregation NA Model" project to the "Cost Data EMEA" macro in the "Data Cleansing and Aggregation EMEA" project. Chain this copied task to the Start task:
from datastar import *
project_1 = Project.connect_to("Data Cleansing and Aggregation NA Model")
macro_1 = project_1.get_macro("Cost Data")
project_2 = Project.connect_to("Data Cleansing and Aggregation EMEA")
macro_2 = project_2.get_macro("Cost Data EMEA")
task_to_copy = macro_1.get_task("Import General Ledger")
start_task = macro_2.get_task("Start")
copied_task = macro_2.add_task(task_to_copy,
                               auto_join=False,
                               previous_task=start_task)
Creating a CSV file connection and a Cosmic Frog Model connection:
from datastar import *
shipments = DelimitedConnection(
    name="Shipment Data",
    path="/My Files/DataStar/Shipments.csv",
    delimiter=","
)
cf_global_sc_strategy = FrogModelConnection(
    name="Global SC Strategy CF Model",
    model_name="Global Supply Chain Strategy"
)
Connect directly to a project's sandbox, read data into a pandas dataframe, transform it, and write the new dataframe into a new table "new_customers":
from datastar import *
# connect to project and get its sandbox
project = Project.connect_to("Import Historical Shipments")
sandbox = project.get_sandbox()
# read the "customers" table into a pandas dataframe
df_customers = sandbox.read_table("customers")
# copy the dataframe into a new dataframe (use .copy() so the
# original dataframe is not modified through the new reference)
df_new_customers = df_customers.copy()
# use pandas to change the customername column values format
# from CZ1, CZ20, etc to Cust_0001, Cust_0020, etc
df_new_customers['customername'] = df_new_customers['customername'].map(lambda x: x.lstrip('CZ'))
df_new_customers['customername'] = df_new_customers['customername'].str.zfill(4)
df_new_customers['customername'] = 'Cust_' + df_new_customers['customername']
# write the updated customers table with the new customername
# values to a new table "new_customers"
sandbox.write_table(df_new_customers, "new_customers")
End-to-end script - create a new project and add a new macro to it; add 7 tasks to the macro to import shipments data; create unique customers, unique distribution centers, and demand aggregated by customer and product from it. Then export these 3 tables to a Cosmic Frog model:
from datastar import *
#------------------------------------
# Create new project and add macro
#------------------------------------
project = Project.create("Scripting with DataStar",
                         description="Show how to use a Python script to "
                                     "create a DataStar project, add connections, create "
                                     "a macro and its tasks, and run the macro.")
macro = project.add_macro(name="Populate 3 CF Model Tables")
#--------------------
# Get & set up connections
#--------------------
sandbox = project.get_sandbox()
cf_model = Connection.get_connection("Cosmic Frog Model")
shipments = DelimitedConnection(
    name="May2024-Sept2025 Shipments",
    path="/My Files/DataStar/shipments.csv",
    delimiter=",")
#-----------------------
# Create tasks
#-----------------------
# Import Task to import the raw shipments from the shipments CSV connection
# into a table named raw_shipments in the project sandbox
import_shipments_task = macro.add_import_task(
    name="Import historical shipments",
    source_connection=shipments,
    destination_connection=sandbox,
    destination_table="raw_shipments")
# Add 3 run SQL tasks to create unique DCs, unique Customers, and Customer
# Demand (aggregated by customer and product from July 2024-June 2025)
# from the raw shipments data.
create_dc_task = macro.add_run_sql_task(
    name="Create DCs",
    connection=sandbox,
    query="""
        CREATE TABLE IF NOT EXISTS distribution_centers AS
        SELECT DISTINCT origin_dc AS dc_name,
               AVG(origin_latitude) AS dc_latitude,
               AVG(origin_longitude) AS dc_longitude
        FROM raw_shipments
        GROUP BY dc_name;""")
create_cz_task = macro.add_run_sql_task(
    name="Create customers",
    connection=sandbox,
    query="""
        CREATE TABLE IF NOT EXISTS customers AS
        SELECT DISTINCT destination_store AS cust_name,
               AVG(destination_latitude) AS cust_latitude,
               AVG(destination_longitude) AS cust_longitude
        FROM raw_shipments
        GROUP BY cust_name;""",
    auto_join=False,
    previous_task=import_shipments_task)
create_demand_task = macro.add_run_sql_task(
    name="Create customer demand",
    connection=sandbox,
    query="""
        CREATE TABLE IF NOT EXISTS customer_demand AS
        SELECT destination_store AS cust_name,
               productname,
               SUM(units) AS demand_quantity
        FROM raw_shipments
        WHERE TO_DATE(ship_date, 'DD/MM/YYYY') BETWEEN
              '2024-07-01' AND '2025-06-30'
        GROUP BY cust_name, productname;""",
    auto_join=False,
    previous_task=import_shipments_task)
# Add 3 export tasks to populate the Facilities, Customers,
# and CustomerDemand tables in empty CF model connection
export_dc_task = macro.add_export_task(
    name="Export distribution centers",
    source_connection=sandbox,
    source_table="distribution_centers",
    destination_connection=cf_model,
    destination_table="facilities",
    destination_table_type="existing",
    destination_table_action="replace",
    mappings=[{"sourceType":"text","targetType":"text",
               "sourceColumn":"dc_name","targetColumn":"facilityname"},
              {"sourceType":"number","targetType":"text",
               "sourceColumn":"dc_latitude","targetColumn":"latitude"},
              {"sourceType":"number","targetType":"text",
               "sourceColumn":"dc_longitude","targetColumn":"longitude"}],
    auto_join=False,
    previous_task=create_dc_task)
export_cz_task = macro.add_export_task(
    name="Export customers",
    source_connection=sandbox,
    source_table="customers",
    destination_connection=cf_model,
    destination_table="customers",
    destination_table_type="existing",
    destination_table_action="replace",
    mappings=[{"sourceType":"text","targetType":"text",
               "sourceColumn":"cust_name","targetColumn":"customername"},
              {"sourceType":"number","targetType":"text",
               "sourceColumn":"cust_latitude","targetColumn":"latitude"},
              {"sourceType":"number","targetType":"text",
               "sourceColumn":"cust_longitude","targetColumn":"longitude"}],
    auto_join=False,
    previous_task=create_cz_task)
export_demand_task = macro.add_export_task(
    name="Export customer demand",
    source_connection=sandbox,
    source_table="customer_demand",
    destination_connection=cf_model,
    destination_table="customerdemand",
    destination_table_type="existing",
    destination_table_action="replace",
    mappings=[{"sourceType":"text","targetType":"text",
               "sourceColumn":"cust_name","targetColumn":"customername"},
              {"sourceType":"text","targetType":"text",
               "sourceColumn":"productname","targetColumn":"productname"},
              {"sourceType":"number","targetType":"text",
               "sourceColumn":"demand_quantity","targetColumn":"quantity"}],
    auto_join=False,
    previous_task=create_demand_task)
#--------------------------------
# Position tasks on Macro Canvas
#--------------------------------
import_shipments_task.x = 250
import_shipments_task.y = 150
import_shipments_task.save()
create_dc_task.x = 500
create_dc_task.y = 10
create_dc_task.save()
create_cz_task.x = 500
create_cz_task.y = 150
create_cz_task.save()
create_demand_task.x = 500
create_demand_task.y = 290
create_demand_task.save()
export_dc_task.x = 750
export_dc_task.y = 10
export_dc_task.save()
export_cz_task.x = 750
export_cz_task.y = 150
export_cz_task.save()
export_demand_task.x = 750
export_demand_task.y = 290
export_demand_task.save()
#-----------------------------------------------------
# Run the macro and write regular progress updates
#-----------------------------------------------------
macro.run()
macro.wait_for_done(verbose=True)
With Optilogic’s new Teams feature set (see the "Getting Started with Optilogic Teams" help center article) working collaboratively on Cosmic Frog models has never been easier: all members of a team have access to all contents added to that team’s workspace. Centralizing data using Teams ensures there is a single source of truth for files/models which prevents version conflicts. It also enables real-time collaboration where files/models are seamlessly shared across all team members, and updates to any files/models are instantaneous for all team members.
However, whether your organization uses Teams or not, there can be a need to share Cosmic Frog models, for example to:
In this documentation we will cover how to share models and the different options for sharing. Models can be shared from an individual user or a team to an individual user or a team. Since the risk of something undesirable happening to a model increases when multiple people work on it, it is important to be able to go back to a previous version of the model. It is therefore best practice to make a backup of a model prior to sharing it, and to continue making backups when important/major changes are going to be made or when you want to try out something new. How to make a backup of a model is explained in this documentation too and is covered first.
A backup of a model is a snapshot of its exact state at a certain point in time. Once a backup has been made, users can revert to it if needed. Creating a backup of a Cosmic Frog model can be initiated from 3 locations within the Optilogic platform: 1) from the Models module within Cosmic Frog, 2) through the Explorer, and 3) from within the Cloud Storage application. The option from within Cosmic Frog will be covered first:

When in the Models module of Cosmic Frog (aka the Model Manager), hover over the model you want to create a backup for, and click on the icon with 3 horizontal dots that comes up at the bottom right of the model card (1). This brings up the model management options context menu, from which you can choose the Backup option (2). If only 1 model is selected, the Backup option can also be accessed from the toolbar at the top of the model list/grid (3).
From the Cloud Storage application it works as follows:

Through the Explorer, the process is similar:

Whether from the Models module within Cosmic Frog, through the Cloud Storage application or via the Explorer, in all 3 cases the Create Backup form comes up:

After clicking on Confirm, a notification at the top of the user’s screen will pop up saying that the creation of a backup has been started:

At the same time, a locked database icon with hover over text of “Backup in progress…” appears in the Status field of the model database (this is in the Cloud Storage application’s list of databases):

This locked database icon will disappear again once the backup is complete.
Users can check the progress of the backup by going to the Account menu under their username at the top right of the screen and selecting “Account Activity” from the drop-down menu:

To access any backups, users can expand individual model databases in the Cloud Storage application:

There are 2 more columns in the list of databases that are not shown in the screenshot above:

When choosing to restore a backup, the following form comes up:

Now that we have discussed how models can be backed up, we will cover how models can be shared. Note that it is best practice to make a backup of your model before sharing it.
If your organization uses Teams, first make sure you are in the correct workspace, either a Team’s or your personal My Account area, from which you want to share a model. You can switch between workspaces using the Team Hub application, which is explained in this "Optilogic Teams - User Guide" help center article.
Like making a backup of a model database, sharing a model can also be done through the Cloud Storage application and the Explorer. Starting with the Cloud Storage option:

The Share Model options can also be accessed through the Explorer:

Now we will cover the steps of sending a copy of a model to another user or team. The original and the copy are not connected to each other after the model is shared in this way: updates to one are not reflected in the other and vice versa.


After clicking on the Send Model Copy button, a message that says “Model Copy Sent Successfully” will be displayed in the Send Model Copy form. Users can go ahead and send copies of other models to other users/teams or close out of the form by clicking on the cross icon at the right top of the form.
In this example, a copy of the CarAssembly model was sent to the Onboarding team. In the Onboarding team’s workspace this model will then appear in the Explorer:

Next, we will step through transferring ownership of a model to another user or team. The original owner will no longer have access to the model after transferring ownership. In the example here, the Onboarding team will transfer ownership of the Tariffs model to an individual user.


After clicking on the Transfer Model Ownership button, a message that says “Transferred Ownership Successfully” will be displayed in the Transfer Model Ownership form. Users can go ahead and transfer ownership of other models to other users/teams or close out of the form by clicking on the cross icon at the right top of the form.
There will be a notification of the model ownership transfer in the workspace of the user/team that performed the transfer:

The model now becomes visible in the My Account workspace of the individual user to whom ownership of the model was transferred:

Lastly, we will show the steps of sharing access to a model with a user or team. Note that Sharing Access to a model can be done from Explorer and from the Cloud Storage application (same as for the Send Copy and Transfer Ownership options), but can also be done from the Models module in Cosmic Frog:

In Cosmic Frog's Models module (aka Model Manager), hover over the model card of the model you want to share access to and then click on the icon with 3 horizontal dots that comes up in the bottom right of the model card (1). Clicking on this icon brings up the model management actions context menu, from which you can choose the Share Access option (2). If only 1 model is selected, the Share Access option is also available from the model management actions toolbar at the top of the model list/grid (3).
In our walk-through example, an individual user will share access to a model called "Fleet Size Optimization - EMEA Geo" with the Onboarding team.



After clicking the plus button to share access to the Fleet Size Optimization - EMEA Geo model with the Onboarding team, the team is now listed in the People with access list:

The model can now be accessed in the Onboarding team’s workspace, and the team receives a notification of the shared access too:

Now that the Onboarding team has access to this model, they can share it with other users/teams too: they can either send a copy of it or share access, but they cannot transfer ownership as they are not the model’s owner.
In the Explorer of the workspace of the user/team who shared access to the model, a similar round icon with arrow inside it will be shown next to the model’s name. The icon colors are just inverted (blue arrow in white circle) and here the hover text is “You have shared this database”, see the screenshot below. There will also be a notification about having granted access to this model and to whom (not shown in the screenshot):

If the model owner decides to revoke access or change the permission level to a shared model, they need to open the Share Model Access form again by choosing Share Access from the Share Model options / clicking on the Share icon when hovering over the model's card on the Cosmic Frog start page:

If access to a model is revoked, the team/user whose access was removed receives a notification about this:

With Read-Only access, teammates and stakeholders can explore a shared model, view maps, dashboards, and analytics, and provide feedback — all while ensuring that the data remains unchanged and secure.
Read-Only mode is best suited for situations where protecting data integrity is a priority, for example:
See the Appendix for a complete list of actions and whether they are allowed in Read-Only Access mode or not.
Similar to revoking access to a previously shared model, in order to change the permission level of a shared model, the user opens the Share Model Access form again by choosing Share Access from the Share Model options / clicking on the Share icon when hovering over the model's card on the Cosmic Frog start page:

Models with Read-Only access can be recognized on the Optilogic platform as follows:

Input tables of Read-Only Cosmic Frog models are greyed out (like output tables already are by default), and write actions (insert, delete, modify) are disabled:

Read-Only models can be recognized as follows in other Optilogic applications:
When working with models that have shared access, please keep the following in mind:
In addition to the various ways model files can be shared between users, there is a way to share a copy of all contents of a folder with another user/team too:

After clicking on the Create Share Link button, the share link is copied to the clipboard. A toast notification of this is temporarily displayed at the right top in the Optilogic platform. The user can paste the link and send it to the user(s) they want to share the contents of the folder with.
When a user who has received the share link copies it into their browser while logged into the Optilogic platform, the following form will be opened:

Folders copied using the share link option will be copied into a subfolder of the Sent To Me folder. The name of this subfolder will be the username / email of the user / team that sent the share link. The file structure of the sent folder will be maintained and appear the same as it was in the account of the sender of the share link.
See the View Share Links section in the Getting Started with the Explorer help center article on how to manage your share links.
Action - Allowed? - Notes:
The Model Manager in Cosmic Frog is the central place to create, view, organize, and maintain your supply chain models. It provides tools for quickly finding models, understanding their status, and performing common management actions such as editing, duplicating, and deleting models.
This guide walks you through the Model Manager interface step by step, explaining each major feature and control as it appears on screen. Screenshots are annotated with green outlines to highlight key areas, and numbered callouts are explained in corresponding lists so you can easily follow along.
When logged into the Optilogic platform, you can open Cosmic Frog by clicking on its icon in the list of applications on the left. Note that the order of the applications may be different in your list so you may need to scroll down:

After opening Cosmic Frog, the Model Manager will typically be the active module. However, if you have been working in a specific model in Cosmic Frog previously, it may immediately open that model with its Data module being the active module. In that case, you can open the Model Manager from within Cosmic Frog by clicking on the icon with 3 horizontal bars at the left top to open the Module Menu, then select Models:

The Model Manager screen displays a table or grid of your existing models along with high level details such as name, status, and last modified date. This view is designed to give you immediate insight into your model library. The following screenshot shows at a high level the different features of the model manager. Each feature will be explained in more detail in subsequent sections of this documentation:

In the following screenshot we have switched from card view to list view:

The Help & Hints area provides contextual guidance to help users understand the purpose of the Model Manager and how to use it effectively. These hints are especially useful for new users:

From the Model Manager there are a few options to create a new Cosmic Frog model, which will be covered in this section. We will start with the option to create a new empty model or a copy of a Resource Library model:

Another option is to create a new model with tables populated from an Excel template: click on the Quick Start Model button to open the Create Quick Start Model form:

Note that for this option there is help available on the right-hand side of the form too, including example Excel template files, which can be downloaded and used as a starting point to populate with your own data. A video on this Quick Start option can be accessed from here too:

Finally, users also have the option to convert their Supply Chain Guru (SCG) models to a Cosmic Frog model. After clicking on the SCG Import button, the following Supply Chain Model Converter form comes up:

As your model library grows, search, sort, and filter tools help you quickly locate the models you need. The following screenshot shows the use of the search box and the available sort options:

Standard filters to reduce the list of models to those of interest are also available:

To learn more about sharing models, please see this "Model Sharing & Backups for Multi-User Collaboration in Cosmic Frog" help center article.
The Model Manager provides comprehensive model management capabilities through both the action toolbar and context menus. The first 2 screenshots in this section show these in card view, while the last screenshot shows the options while in list view:

The model management actions available from the toolbar and from the context menu of a model in card view are shown in this next screenshot:

Lastly, the next screenshot shows the Model Management Actions while in list view:
