Showing supply chains on maps is a great way to visualize them, to understand differences between scenarios, and to show how they evolve over time. Cosmic Frog offers users many configuration options to customize maps to their exact needs and compare them side-by-side. In this documentation we will cover how to create and configure maps in Cosmic Frog.
In Cosmic Frog, a map represents a single geographic visualization composed of different layers. A layer is an individual supply chain element such as a customer, product flow, or facility. To show locations on a map, they need to exist in the master tables (e.g. Customers, Facilities, and Suppliers) and they need to have been geocoded (see also the How to Geocode Locations section in this help center article). Flow-based layers are based on output tables, such as the OptimizationFlowSummary or SimulationFlowSummary; to draw these, the model needs to have been run so that outputs are present in these tables.
Maps can be accessed through the Maps module in Cosmic Frog:

The Maps module opens and shows the first map in the Maps list; this will be the default pre-configured “Supply Chain” map for models the user created and for most models copied from the Resource Library:

In addition to what is mentioned under bullet 4 of the screenshot just above, users can also perform the following actions on maps:

As we have seen in the screenshot above, the Maps module opens with a list of pre-configured maps and layers on the left-hand side:

The Map menu in the toolbar at the top of the Maps module allows users to perform basic map and layer operations:

These options from the Map menu are also available in the context menu that comes up when right-clicking on a map or layer in the Maps list.
The Map Filters panel can be used to set scenarios for each map individually. If users want to use the same scenario for all maps present in the model, they can use the Global Scenario Filter located in the toolbar at the top of the Maps module:

Now all maps in the model will use the selected scenario, and the option to set the scenario at the map-level is disabled.
When a global scenario has been set, it can be removed using the Global Scenario Filter again:

The zoom level, how the map is centered, and the configuration of maps and their layers all persist. After moving between other modules within Cosmic Frog or switching between models, when the user comes back to the map(s) in a specific model, the map settings are the same as when they were last configured.
Now let us look at how users can add new maps, and the map configuration options available to them.

Once done typing the name of the new map, the panel on the right-hand side of the map changes to the Map Filters panel which can be used to select the scenario and products the map will be showing. If the user wants to see a side-by-side map comparison of 2 scenarios in the model, this can be configured here too:

In the screenshot above, the Comparison toggle is hidden by the Product drop-down. In the next screenshot it is shown. By default, this toggle is off; when sliding it right to be on, we can configure which scenario we want to compare the previously selected scenario to:

Please note:
Instead of setting which scenario to use for each map individually on the Map Filters panel, users can instead choose to set a global scenario for all maps to use, as discussed above in the Global Scenario Filter section. If a global scenario is set, the Scenario drop-down on the Map Filters panel will be disabled and the user cannot open it:

On the Map Information panel, users have a lot of options to configure what the map looks like and what entities (outside of the supply chain ones configured in the layers) are shown on it:

Users can choose to show a legend on the map and configure it on the Map Legend pane:

To start visualizing the supply chain that is being modelled on a map, the user needs to add at least 1 layer to the map, which can be done by choosing “New Layer” from the Map menu:

Once a layer has been added or is selected in the Maps list, the panel on the right-hand side of the map changes to the Condition Builder panel which can be used to select the input or output table and any filters on it to be used to draw the layer:

We will now also look at using the Named Filters option to filter the table used to draw the map layer:

In this walk-through example, the user chooses to enable the “DC1 and DC2” named filter:

Lastly, for the Named Filters option, users can view a grid preview to ensure the correct filtered records are being drawn on the map:

In the next layer configuration panel, Layer Style, users can choose what the supply chain entities that the layer shows will look like on the map. This panel looks somewhat different for layers that show locations (Type = Point) than for those that show flows (Type = Line). First, we will look at a point type layer (Customers):

Next, we will look at a line type layer, Customer Flows:

At the bottom of the Layer Style pane a Breakpoints toggle is available too (not shown in the screenshots above). To learn more about how these can be used and configured, please see the "Maps - Styling Points & Flows based on Breakpoints" Help Center article.
Labels and tooltips can be added to each layer, so users can more easily see properties of the entities shown in the layer. The Layer Labels configuration panel allows users to choose what to show as labels and tooltips, and configure the style of the labels:

When modelling multiple periods in network optimization (Neo) models, users can see how the supply chain evolves over time using the map:

Users can now add Customers, Facilities and Suppliers via the map:

After adding the entity, we see it showing on the map, here as a dark blue circle, which is how the Customers layer is configured on this map:

Looking in the Customers table, we notice that CZ_Philadelphia has been added. Note that while its latitude and longitude fields are set, other fields such as City, Country and Region are not automatically filled out for entities added via the map:

In this final section, we will show a few example maps to give users some ideas of what maps can look like. In this first screenshot, a map for a Transportation Optimization (Hopper engine) model, Transportation Optimization UserDefinedVariables available from Optilogic’s Resource Library (here), is shown:

Some notable features of this map are:
The next screenshot shows a map of a Greenfield (Triad engine) model:

Some notable features of this map are:
This following screenshot shows a subset of the customers in a Network Optimization (Neo engine) model, the Global Sourcing – Cost to Serve model available from Optilogic’s Resource Library (here). These customers are color-coded based on how profitable they are:

Some notable features of this map are:
Lastly, the following screenshot shows a map of the Tariffs example model, a network optimization (Neo engine) model available from Optilogic’s Resource Library (here), where suppliers located in Europe and China supply raw materials to the US and Mexico:

Some notable features of this map are:
We hope users feel empowered to create their own insightful maps. For any questions, please do not hesitate to contact Optilogic support at support@optilogic.com.
Users of the Optilogic platform can easily access all files they have in their Optilogic account and perform common tasks like opening, copying, and sharing them by using the built-in Explorer application. This application sits across all other applications on the Optilogic platform.
This documentation will walk users through how to access the Explorer, explain its folder and file structure, how to quickly find files of interest, and how to perform common actions.
By default, the Explorer is closed when users are logged into the Optilogic platform; they can open it at the top of the applications list:

Once the Explorer is open, your screen will look similar to the following screenshot:

This next screenshot shows the Explorer when it is open while the user is working inside the workspace of one of the teams they are part of, and not in their My Account workspace:

When a new user logs into their Optilogic account and opens the Explorer, they will find there are quite a few folders and files present in their account already. The next screenshot shows the expanded top-level folders:


As you may have noticed already, different file types can be recognized by the different icons to the left of the file’s name. The following table summarizes some of the common file types users may have in their accounts, shows the icon used for these in the Explorer, and indicates which application the file will be opened in when (left-)clicking on the file:

*When clicking on files of these types, the Lightning Editor application will be opened and a message stating that the file is potentially unsupported will be displayed. Users can click on a “Load Anyway” button to attempt to load the file in the Lightning Editor. If the user chooses to do so, the file will be loaded, but the result will usually be unintelligible for these file types.
Some file types can be opened in other applications on the Optilogic platform too. These options are available from the right-click context menus, see the “Right-click Context Menus” section further below.
Icons to the right of names of Cosmic Frog models in the Explorer indicate if the model is a shared one and if so, what type of access the user / team has to it. Hovering over these icons will show text describing the type of share too.

Learn more about sharing models and the details of read-write vs read-only access in the “Model Sharing & Backups for Multi-user Collaboration in Cosmic Frog” help center article.
While working on the Optilogic platform, additional files and folders can be created in / added to a user’s account. In this section we will discuss which applications create what types of files and where in the folder structure they can be found in the Explorer.
The Resource Library on the Optilogic platform contains example Cosmic Frog models, DataStar template projects, Cosmic Frog for Excel Apps, Python scripts, reference data, utilities, and additional tools to help make Optilogic platform users successful. Users can browse the Resource Library and copy content from there to their own account to explore further (see the “How to use the Resource Library” help center article for more details):

Please note that Cosmic Frog models copied from the Resource Library are placed into a subfolder with the model’s name under the Resource Library folder; they can be recognized in the Explorer by their frog icon to the left of the model’s name and the .frog extension.
In addition, please note that previously, files copied from the Resource Library were placed in a different location in users’ accounts and not in the Resource Library folder and its subfolders. The old location was a subfolder with the resource’s name under the My Files folder. Users who have been using the Optilogic platform for a while will likely still see this file structure for files copied from the Resource Library before this change was made.
Users can create new Cosmic Frog models from Cosmic Frog’s start page (see this help center article); these will be placed in a subfolder named “Cosmic Frog Models”, which sits under the My Files folder:

Users can create new DataStar projects from DataStar's start page (see this help center article); these will be placed in a subfolder named “DataStar”, which sits under the My Files folder. Within this DataStar folder, sub-folders with the names of the DataStar projects are created and the .dstar project files are located in these folders. In the following screenshot, we are showing 2 DataStar projects, 1 named "Cost to Serve Analysis" and 1 named "Create Customers":

DataStar users may upload files to use with their data connections through the DataStar application (see this help center article). These uploaded files are also placed in the /My Files/DataStar folder:

When working with any of the Cosmic Frog for Excel Apps (see also this help center article), the working files for these will be placed in subfolders under the My Files folder. These are named “z Working Folder for … App”:

In addition to the above-mentioned subfolders (Resource Library, Cosmic Frog Models, DataStar, and “z Working Folder for … App” folders) which are often present under the My Files top-level folder in a user’s Optilogic account, there are several other folders worth covering here:
Now that we have covered the folder and file structure of the Explorer including the default and common files and folders users may find here, it is time to cover how users can quickly find what they need using the options towards the top of the Explorer application.
There is a free type text search box at the top of the Explorer application, which users can use to quickly find files and folders that contain the typed text in their names:

There is a quick search option to find all DataStar projects in the user’s account:

Similarly, there is a quick search option to find all Cosmic Frog models in the user’s account:

There is also a quick filter function to find all PostgreSQL databases in a user's account:

Users can create share links for folders in their Optilogic account to send a copy of the folder and all its contents to other users. See this “Folder Sharing” section in the “Model Sharing & Backups for Multi-User Collaboration in Cosmic Frog” help center article on how to create and use share links. If a user has created any share links for folders in their account, these can be managed by clicking on the View Share Links icon:

When browsing through the files and folders in your Optilogic account, you may collapse and expand quite a few different folders and their subfolders. Users can at times lose track of where the file they had selected is located. To help with this, users have the “Expand to Selected File” option available to them:


In addition to using the Expand to Selected File option, please note that switching to another file in the Lightning Editor, by for example clicking on the Facilities.csv file, will further expand the Explorer to show that file in the list too. If needed, the Explorer will also automatically scroll up or down to show the active file in the center of the list.
If you have many folders and subfolders expanded, it can be tedious to collapse them all one by one again. Therefore, users also have a “Collapse All” option at their disposal when working with the Explorer. The following screenshot shows the state of the Explorer before clicking on the Collapse All icon, which is the 6th of the 7 icons to the right of the Search box in the following screenshot:

The user then clicks on the Collapse All icon and the following screenshot shows the state of the Explorer after doing so:

Note that the Collapse All icon has now become inactive and will remain so until any folders are expanded again.
Sometimes when deleting, copying, or adding files or folders to a user’s account, these changes may not be immediately reflected in the Explorer files & folders list as they may take a bit of time. The last of the icons to the right of / underneath the Search box provides users with a “Refresh Files” option. Clicking on this icon will update the files and folders list such that all the latest are showing in the Explorer:

In this final section of the Explorer documentation, we will cover the options users have from the context menus that come up when right-clicking on files and folders in the Explorer. Screenshots and text will explain the options in the context menus for folders, Cosmic Frog models, text-based files, and all other files.
When right-clicking on a folder in the Explorer, users will see the following context menu come up (here the user right-clicked on the Model Library folder):

The options from this context menu are, from top to bottom:
Note that right-clicking on the Get Started Here folder gives fewer options: just the Copy (with the same 3 options as above), Share Link, and Delete Folder options are available for this folder.
Now, we will cover the options available from the context menu when right-clicking on different types of files, starting with Cosmic Frog models:

The options, from top to bottom, are:
Please note that the Cosmic Frog models listed in the Explorer are not actual databases, but pointer files. These are essentially empty placeholder files to let users visualize and interact with models inside the Explorer. Due to this, actions like downloading are not possible; working directly with the Cosmic Frog model databases can be done through Cosmic Frog or the SQL Editor.
Next, we will look at the right-click context menu for DataStar projects. The options here are very similar to those of Cosmic Frog models:

The options, from top to bottom, are:
When right-clicking on a Python script file, the following context menu will open:

The options, from top to bottom, are:
The next 2 screenshots show what it looks like when comparing 2 text-based files with each other:


Other text-based files, such as those with extensions of .csv, .txt, .md and .html have the same options in their context menus as those for Python script files, with the exception that they do not have a Run Module option. The next screenshot shows the context menu that comes up when right-clicking on a .txt file:

Other files, such as those with extensions of .pdf, .xls, .xlsx, .xlsm, .png, .jpg, .twb and .yxmd, have the same options from their context menus as Python scripts, minus the Compare and Run Module options. The following screenshot shows the context menu of a .pdf file:

As always, please feel free to let us know of any questions or feedback by contacting Optilogic support on support@optilogic.com.
This documentation covers which geo providers one can use with Cosmic Frog, and how they can be used for geocoding and distance and time calculations.
Currently, there are 5 geo providers that can be used for geocoding locations in Cosmic Frog: MapBox, Bing, Google, PTV, and PC*Miler. MapBox is the default provider and comes free of cost with Cosmic Frog. To use any of the other 4 providers, you will need to obtain a license key from the company and add this to Cosmic Frog through your Account. The steps to do so are described in this help article “Using Alternate Geocoding Providers”.
Different geocoding providers may specialize in different geographies; refer to your provider for guidelines.
Geocoding a location (e.g. a customer, facility or supplier) means finding the latitude and longitude for it. Once a location is geocoded it can be shown on a map in the correct location which helps with visualizing the network itself and building a visual story using model inputs and outputs that are shown on maps.

To geocode a location:

For costs and capacities to be calculated correctly, it may be necessary to add transport distances and transport times to Cosmic Frog models. There are defaults that will be used if nothing is entered into the model, or users can populate these fields, either themselves or by using the Distance Lookup Utility. Here we will explain the tables where distances and times can be entered, what happens if nothing has been entered, and how users can utilize the Distance Lookup Utility.
There are multiple Cosmic Frog input tables that have input fields related to Transport Distance and Transport Time, as well as Speed, which can be used to calculate transport time from a transport distance (time = distance / speed). These all have their own accompanying UOM (unit of measure) field. Here is an overview of the tables which contain Distance, Time and/or Speed fields:
For Optimization (Neo), this is the order of precedence that is applied when multiple tables and fields are used:

For Transportation (Hopper) models, this is the order of precedence when multiple tables and fields are being used:
To populate these input tables and their pertinent fields, users have the following options:
Cosmic Frog users can find multiple handy utilities in the Utilities section of Cosmic Frog - here we will cover the Distance Lookup utility. This utility looks up transportation distances and times for origin-destination pairs and populates the Transit Matrix table. Bing, PC Miler, and Azure can be used as geo providers if the user has a license key for these. In addition, there is a free PC Miler-UltraFast option which can look up accurate road distances within the EU and North America without needing a license key; this is also a very fast way to look up distances. A new free provider, OLRouting, has been added as well; it leverages Valhalla, an open-source routing engine for OpenStreetMap, has global coverage, and performs lookups very quickly. Lastly, the Great Circle Geo Provider option calculates the straight-line distance for origin-destination pairs based on latitudes & longitudes. We will look at the configuration options of the utility using the next 2 screenshots:


Note that when using the Great Circle geo provider for Distance calculations, only the Transport Distance field in the Transit Matrix table will be populated. The Transport Time will be calculated at run time using the Average Speed on the Model Settings table.
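For reference, this kind of straight-line distance can be computed with the haversine formula. The sketch below is an illustration in PostgreSQL only, assuming a hypothetical od_pairs table with origin/destination latitude and longitude columns; the Great Circle provider's actual implementation may differ. Transport time can then be derived as distance divided by the average speed.
-- Illustration only: haversine (great-circle) distance in km between each
-- origin-destination pair; od_pairs and its column names are assumptions.
SELECT originname,
       destinationname,
       2 * 6371 * ASIN(SQRT(
           POWER(SIN(RADIANS(destinationlatitude - originlatitude) / 2), 2) +
           COS(RADIANS(originlatitude)) * COS(RADIANS(destinationlatitude)) *
           POWER(SIN(RADIANS(destinationlongitude - originlongitude) / 2), 2)
       )) AS transportdistance_km
FROM od_pairs;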
To finish up, we will walk through an example of using the Distance Lookup utility on a simple model with 3 customers (CZs) and 2 distribution centers (DCs), which are shown in the following 2 screenshots of the Customers and Facilities tables:


We can use Groups and/or Named Table Filters in the Transportation Policies table if we want to make 1 policy that represents all possible lanes from the DCs to the customers:

Next, we run the Distance Lookup utility with following settings:
This results in the following 6 records being added to the Transit Matrix table - 1 for each possible DC-CZ origin-destination pair:

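Conceptually, these 6 records are simply the cross product of the 2 DCs and the 3 CZs, which the utility enumerates before looking up distance and time for each pair. As an illustration only (not how the utility itself is implemented), a SQL sketch of that enumeration, assuming the standard facilityname and customername columns, could look like:
-- Illustration only: enumerate every DC-to-CZ origin-destination pair.
SELECT f.facilityname AS originname,
       c.customername AS destinationname
FROM facilities f
CROSS JOIN customers c;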
Named Filters are an exciting new feature which allows users to create and save specific filters directly on grid views, to then be utilized seamlessly across all policies tables, scenario items, and map layers. For example, if you create a filter named “DCs” in the Facilities table to capture all entries with “DC” in their name, this Named Filter can then be applied in a policy table, providing a dynamic alternative to the traditional Group function.
Unlike Groups, named filters automatically update: adding or removing a DC record in the Facilities table will instantly reflect in the Named Filter, streamlining the workflow and eliminating the need for manual updates. Additionally, when creating Scenario Items or defining Map Layers, users can easily select Named Filters to represent specific conditions, easily previewing the data, making the process much quicker and simpler.
In this help article, how Named Filters are created will be covered first. In the sections after, we will discuss how Named Filters can be used on input tables, in scenario items, and on map layers, while the final section contains a few notes on deleting Named Filters.
Named Filters can be set up and saved on any Cosmic Frog table: input tables, output tables, and custom tables. These tables are found in the Data module of a Cosmic Frog model:


A quick description of each of the options available in the Filter drop-down menu follows here; we will cover most of these in more detail in the remainder of this Help Article:
Let’s walk through setting up a filter on the Facilities table that filters out records where the Facility Name ends in “DC” and save it as a named filter called “DCs”:

After setting up this filter, click on Add Filter in the Filter drop-down menu to save it:


There is a right-click context menu available for filters listed in the Named Filters pane, which allows the user to perform some of the same actions as those in the main Filter menu shown above:

Next, we will see how multiple Named Filters can be applied to a table. In the example we will use, there are 4 Named Filters set up on the Facilities table:

Next, we will apply another Named Filter in addition to this first one ("USA locations"). How the Named Filters work together depends on if they are filtering on the same field or on different fields:
Now, if we want to filter out only Ports located in the USA, we can apply 2 of the Named Filters simultaneously:

To show an example of how multiple named filters that filter on the same field work, we will add a third Named Filter:

To alter an existing filter, we can change the criteria of this existing filter, and then save the resulting filter as a new Named Filter. Let’s illustrate this through an example: in a model with about 1.3k customers in the US, we have created a Named Filter “US Midwestern States”, but later on realize that we have not included the state of Michigan as 1 of the 12 Regions to include in this filter:


Now the US Midwestern States Named Filter with this one specific addition is applied. However, this does not update the US Midwestern States Named Filter itself. The table will now show the field name Region at the right top as the indication of what type of filter is applied. Click on Add Filter in the Filter drop-down and give the Named Filter a new unique name; naming it the same as an existing Named Filter to override it is not possible. Say we call the new Named Filter “US Midwestern States New”:

If the US Midwestern States filter was already used on any other tables, scenario items or map layers (see sections below on how to), then the easiest way to start using the new filter with Michigan added is to 1) either delete or rename the US Midwestern States filter (for example to US Midwestern States Old), and then 2) rename the US Midwestern States New filter to US Midwestern States.
In this example, we added a condition to a field that already had multiple conditions on it. Similar steps can be used to update Named Filters where conditions are removed from fields or conditions are added to fields that do not have conditions on them yet.
So far, the only examples were of filters applied to one field in an input table. The next example shows a Named Filter called “CZ2* Space Suit demand >6k” on the Customer Demand input table:

Conditions were applied to 3 fields in the Customer Demand table, as follows: 1) Customer Name Begins With “CZ2”, 2) Product Name Contains “Space”, and 3) Quantity Greater Than “6000”. The resulting filter was saved as a Named Filter with the name “CZ2* Space Suit demand >6k” which is applied in the screenshot above. When hovering over this Named Filter, we indeed see the 3 fields and that they each have a single condition on them.
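For readers who think in SQL terms, the combined conditions of this Named Filter are roughly equivalent to the following WHERE clause. This is an illustration only: Named Filters are configured in the grid rather than written as SQL, and the table and column names shown are assumptions based on the Anura schema.
-- Illustration only: approximate SQL equivalent of the Named Filter's conditions.
SELECT *
FROM customerdemand
WHERE customername LIKE 'CZ2%'
  AND productname LIKE '%Space%'
  AND quantity > 6000;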
Besides being able to create Named Filters on input tables, they can also be created for output and custom tables. On output tables, this can expedite the review of results after running additional scenarios: once the runs are done, one can apply a pre-saved set of Named Filters one after the other, instead of having to re-type each filter that shows the outputs of interest every time. This example shows a Named Filter on the Optimization Facility Summary output table to show records where the Throughput Utilization is greater than 0.8:

Named Filters for certain model elements (i.e. Customers, Facilities, Suppliers, Products, Periods, Modes, Shipments, Transportation Assets, Processes, Bills Of Materials, Work Centers, and Work Resources) can be used in other input tables, very similar to how Groups work in Cosmic Frog: instead of setting up multiple records for individual elements, for example a transportation policy from A to B for each finished good, a Named Filter that filters out all finished goods on the Products table can be used to set up 1 transportation policy for these finished goods from A to B (which at run-time will be expanded into a policy for each finished good). The advantage of using Named Filters instead of Groups is that Named Filters are dynamic. If records are added to tables containing model elements and they match the conditions of any Named Filters, they are automatically added to those Named Filters. Think for example of Products with the pre-fix FG_ to indicate they are finished goods and a Named Filter “Finished Goods” that filters the Product Name on Begins With “FG_”. If a new product is added where the Product Name starts with FG_, it is automatically added to the Finished Goods Named Filter and anywhere this filter is used this new finished good is now included too. We will look at 2 examples in the next few screenshots.

The completed transportation policy record uses Named Filters for the Origin Name, Destination Name, and Product Name, making this record flexible as long as the naming conventions of the factories, ports, and raw materials keep following the same rules.

The next example is on one of the Constraints tables, Production Constraints. On the Constraints tables, the Group Behavior fields dictate how an element name that is a Group or a Named Filter should be used. When set to Enumerate, the constraint is applied to each individual member of the group or named filter. If it is set to Aggregate, the constraint applies to all members of the group or named filter together. This Production Constraint states that at each factory a maximum amount of 150,000 units over all finished goods together can be produced:

Previously, when setting up a scenario item, the records that the scenario item’s change needed to be applied to could be set by using the Condition Builder. Now users have the added option to use a saved Named Filter instead, which is easier since the user does not need to know the syntax for building a condition, and more flexible since Named Filters are dynamic, as was discussed in the previous section. In addition, users can preview the records that the change will be made to, so the chance of mistakes is reduced.
Please note that a maximum of 1 Named Filter can be used on a scenario item.
The following example changes the Suppliers table of a model which has around 70 suppliers, about half of these are in Europe and the other half in China:


The Named Filters drop-down collapses after choosing the China Suppliers Named Filter as the condition, and now we see the Preview of the filtered grid. This is the Suppliers table with the Named Filter China Suppliers applied. At the right top of the grid the name of the applied Named Filter(s) is shown, and we can see that in the preview we indeed only see Suppliers for which the Country is China. So these are the records the change (setting Status = Exclude) will be made to in scenarios that use this scenario item.
A few notes on the Filter Grid Preview:
Besides using Named Filters on other input tables and for setting up conditions for scenario items, they can also be used as conditions for Map Layers, which will be covered in this final section of this Help Article. Like for scenario items, there is also a Filter Grid Preview for Map Layers to double-check which records will be filtered out when applying the condition(s) of 1 or multiple Named Filters.
In this first example, a Named Filter on the Facilities table filters out only the Facilities that are located in the USA:


Another example of the same model is to use a different Named Filter from the Facilities table to show only Factories on the map:

If the Factories and the Ports Named Filters had both been enabled, then all factories and ports would be showing on the map. So, like for scenario items, applying multiple Named Filters to a Map Layer is additive (acts like OR statements).
The same notes that were listed for the Filter Grid Preview for scenario items apply to the Filter Grid Preview for Map Layers too: columns with conditions have the filter icon on them, users can resize and (multi-)sort the columns, however, re-ordering the columns is not possible.
Named Filters can be deleted, and this affects other input tables, scenario items, and map layers that used the now deleted Named Filter(s) in slightly different ways. This will be explained in this final section of the Help Article on Named Filters.
A Named Filter can be deleted by using the Filter Menu’s “Delete Filter” option or by choosing that same option from the right-click context menu of a Named Filter. One important note first: the user will not be asked for a confirmation after choosing the Delete Filter option; the filter will be deleted immediately.
The results of deleting a Named Filter that was used are as follows:

Removing the deleted Named Filter(s) from these Map Layers resolves the issues.
Please watch this 5-minute video for an overview of DataStar, Optilogic’s new AI-powered data application designed to help supply chain teams build and update models & scenarios and power apps faster & easier than ever before!
For detailed DataStar documentation, please see Navigating DataStar on the Help Center.
DataStar is Optilogic’s new AI-powered data product designed to help supply chain teams build and update models & scenarios and power apps faster & easier than ever before. It enables users to create flexible, accessible, and repeatable workflows with zero learning curve—combining drag-and-drop simplicity, natural language AI, and deep supply chain context.
Today, up to an estimated 80% of a modeler's time is spent on data—connecting, cleaning, transforming, validating, and integrating it to build or refresh models. DataStar drastically shrinks that time, enabling teams to:
The 2 main goals of DataStar are 1) ease of use, and 2) effortless collaboration; these are achieved by:
In this documentation, we will start with a high-level overview of the DataStar building blocks. Next, creating projects and data connections will be covered before diving into the details of adding tasks and chaining them together into macros, which can then be run to accomplish the data goals of your project.
Please see this "Getting Started with DataStar: Application Overview" video for a quick 5-minute overview of DataStar.
Before diving into more details in later sections, this section will describe the main building blocks of DataStar, which include Data Connections, Projects, Macros, and Tasks.
Since DataStar is all about working with data, Data Connections are an important part of DataStar. These enable users to quickly connect to and pull in data from a range of data sources. Data Connections in DataStar:

Projects are the main container of work within DataStar. Typically, a Project will aim to achieve a certain goal by performing all or a subset of importing specific data, then cleansing, transforming & blending it, and finally publishing the results to another file/database. The scope of DataStar Projects can vary greatly; consider the following 2 examples:
Projects consist of 1 or multiple macros, which in turn consist of 1 or multiple tasks. Tasks are the individual actions or steps which can be chained together within a macro to accomplish a specific goal.
The next screenshot shows an example Macro called "Transportation Policies" which consists of 7 individual tasks that are chained together to create transportation policies for a Cosmic Frog model from imported Shipments and Costs data:

Every project by default contains a Data Connection named Project Sandbox. This data connection is not global to all DataStar projects; it is specific to the project it is part of. The Project Sandbox is a Postgres database into which users generally import the raw data from the other data connections, where they perform transformations and save intermediate states of the data, and from which they then publish the results out to a Cosmic Frog model (which is a data connection separate from the Project Sandbox connection). It is also possible that some of the data in the Project Sandbox is the final result/deliverable of the DataStar Project, or that the results are published into a different type of file or system that is set up as a data connection rather than into a Cosmic Frog model.
The next diagram shows how Data Connections, Projects, and Macros relate to each other in DataStar:

On the start page of DataStar, the user will be shown their existing projects and data connections. These can be opened or deleted here, and users also have the ability to create new projects and data connections from this start page.
The next screenshot shows the existing projects in card format:

New projects can be created by clicking on the Create Project button in the toolbar at the top of the DataStar application:

If on the Create Project form a user decides they want to use a Template Project rather than a new Empty Project, it works as follows:

These template projects are also available on Optilogic's Resource Library:

After the copy process completes, we can see the project appear in the Explorer and in the Project list in DataStar:

Note that any files needed for data connections in template projects copied from the Resource Library can be found under the "Sent to Me" folder in the Explorer. They will be in a subfolder named @datastartemplateprojects#optilogic (the sender of the files).
The next screenshot shows the Data Connections that have already been set up in DataStar in list view:

New data connections can be created by clicking on the Create Data Connection button in the toolbar at the top of the DataStar application:

The remainder of the Create Data Connection form will change depending on the type of connection that was chosen as different types of connections require different inputs (e.g. host, port, server, schema, etc.). In our example, the user chooses CSV Files as the connection type:

In our walk-through here, the user drags and drops a Shipments.csv file from their local computer on top of the Drag and drop area:

Now let us look at a project when it is open in DataStar. We will first get a lay of the land with a high-level overview screenshot and then go into more detail for the different parts of the DataStar user interface:

Next, we will dive a bit deeper into a macro:

The Macro Canvas for the Customers from Shipments macro is shown in the following screenshot:

In addition to the above, please note the following regarding the Macro Canvas:

We will move on to covering the 2 tabs on the right-hand side pane, starting with the Tasks tab. Keep in mind that in the descriptions of the tasks below, the Project Sandbox is a Postgres database connection. The following tasks are currently available:

Users can click on a task in the tasks list and then drag and drop it onto the macro canvas to incorporate it into a macro. Once added to a macro, a task needs to be configured; this will be covered in the next section.
When adding a new task, it needs to be configured, which can be done on the Configuration tab. When a task is newly dropped onto the Macro Canvas its Configuration tab is automatically opened on the right-hand side pane. To make the configuration tab of an already existing task active, click on the task in the Macros tab on the left-hand side pane or click on the task in the Macro Canvas. The configuration options will differ by type of task, here the Configuration tab of an Import task is shown as an example:


Please note that:
The following table provides an overview of what connection type(s) can be used as the source / destination / target connection by which task(s), where PG is short for a PostgreSQL database connection and CF for a Cosmic Frog model connection:

Leapfrog in DataStar (aka D* AI) is an AI-powered feature that transforms natural language requests into executable DataStar Update and Run SQL tasks. Users can describe what they want to accomplish in plain language, and Leapfrog automatically generates the corresponding task query without requiring technical coding skills or manual inputs for task details. This capability enables both technical and non-technical users to efficiently manipulate data, build Cosmic Frog models, and extract insights through conversational interactions with Leapfrog within DataStar.
Note that there are 2 appendices at the end of this documentation where 1) details around Leapfrog in DataStar's current features & limitations are covered and 2) Leapfrog's data usage and security policies are summarized.


Leapfrog’s response to this prompt is as follows:

DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT
destination_store AS customer,
AVG(destination_latitude) AS latitude,
AVG(destination_longitude) AS longitude
FROM rawshipments
GROUP BY destination_store;
To help users write prompts, the tables present in the Project Sandbox and their columns can be accessed from the prompt writing box by typing an @:


This user used the @ functionality repeatedly to write their prompt as follows, which helped to generate their required Run SQL task:

Now, we will also have a look at the Conversations tab while showing the 2 tabs in Split view:

Within a Leapfrog conversation, Leapfrog remembers the prompts and responses thus far. Users can therefore build upon previous questions, for example by following up with a prompt along the lines of “Like that, but instead of using a cutoff date of August 10, 2025, use September 24, 2025”.
Additional helpful DataStar Leapfrog links:
Users can run a Macro by selecting it and then clicking on the green Run button at the right top of the DataStar application:

Please note that:

Next, we will cover the Logs tab at the bottom of the Macro Canvas where logs of macros that are running/have been run can be found:

When a macro has not yet been run, the Logs tab will contain a message with a Run button, which can also be used to kick off a macro run. When a macro is running or has been run, the log will look similar to the following:

The next screenshot shows the log of a run of the same macro where the third task ended in an error:

The progress of DataStar macro and task runs can also be monitored in the Run Manager application where runs can be cancelled if needed too:

Please note that:
In the Data Connections tab on the left-hand side pane the available data connections are listed:

Next, we will have a look at what the connections list looks like when the connections have been expanded:

The tables within a connection can be opened within DataStar. They are then displayed in the central part of DataStar where the Macro Canvas is showing when a macro is the active tab.
Please note:

A table can be filtered based on values in one or multiple columns:


Columns can be re-ordered and hidden/shown as described in the Appendix; this can be done using the Columns fold-out pane too:

Finally, filters can also be configured from a fold-out pane:

Users can explore the complete dataset of connections with tables larger than 10k records in other applications on the Optilogic platform, depending on the type of connection:
Here is how to find the database and table(s) of interest in SQL Editor:

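Once the database is selected, a simple query previews the full dataset beyond the 10,000-record limit shown in the DataStar grid. The table name below is only an example; replace it with the table you located:
-- Example only: substitute the table you located in SQL Editor.
SELECT COUNT(*) AS total_records FROM rawshipments;
SELECT * FROM rawshipments LIMIT 1000;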
Here are a few additional links that may be helpful:
We hope you are as excited about starting to work with DataStar as we are! Please stay tuned for regular updates to both DataStar and all the accompanying documentation. As always, for any questions or feedback, feel free to contact our support team at support@optilogic.com.
The grids used in DataStar can be customized and we will cover the options available through the screenshot below. This screenshot shows the list of CSV files in a user's Optilogic account when creating a new CSV File connection. The same grid options are available on the grid in the Logs tab and when viewing tables that are part of any Data Connections in the central part of DataStar.

Leapfrog's brainpower comes from:
All training processes are owned and managed by Optilogic — no outside data is used.
When you ask Leapfrog a question:
Your conversations (prompts, answers, feedback) are stored securely at the user level.
In this quick start guide we will walk through importing a CSV file into the Project Sandbox of a DataStar project. The steps involved are:
Our example CSV file is one that contains historical shipments from May 2024 through August 2025. There are 42,656 records in this Shipments.csv file, and if you want to follow along with the steps below you can download a zip-file containing it here (please note that the long character string at the beginning of the zip's file name is expected).
Open the DataStar application on the Optilogic platform and click on the Create Data Connection button in the toolbar at the top:

In the Create Data Connection form that comes up, enter the name for the data connection, optionally add a description, and select CSV Files from the Connection Type drop-down list:

If your CSV file is not yet on the Optilogic platform, you can drag and drop it onto the “Drag and drop” area of the form to upload it to the /My Files/DataStar folder. If it is already on the Optilogic platform, or after uploading it through the drag and drop option, you can select it in the list of CSV files. Once selected, it becomes greyed out in the list to indicate it is the file being used; it is also pinned at the top of the list with a darker background shade so users know without scrolling which file is selected. Note that you can filter this list by typing in the Search box to quickly find the desired file. Once the file is selected, clicking on the Add Connection button will create the CSV connection:

After creating the connection, the Data Connections tab on the DataStar start page will be active, and it shows the newly added CSV connection at the top of the list (note the connections list is shown in list view here; the other option is card view):

You can either go into an existing DataStar project or create a new one to set up a Macro that will import the data from the Historical Shipments CSV connection we just set up. For this example, we create a new project by clicking on the Create Project button in the toolbar at the top when on the start page of DataStar. Enter the name for the project, optionally add a description, change the appearance of the project if desired by clicking on the Edit button, and then click on the Add Project button:

After the project is created, the Projects tab will be shown on the DataStar start page. Click on the newly created project to open it in DataStar. Inside DataStar, you can either click on the Create Macro button in the toolbar at the top or the Create a Macro button in the center part of the application (the Macro Canvas) to create a new macro which will then be listed in the Macros tab in the left-hand side panel. Type the name for the macro into the textbox:

When a macro is created, it automatically gets a Start task added to it. Next, we open the Tasks tab by clicking on the tab on the left in the panel on the right-hand side of the macro canvas. Click on Import and drag it onto the macro canvas:

When hovering close to the Start task, it will be suggested to connect the new Import task to the Start task. Dropping the Import task here will create the connecting line between the 2 tasks automatically. Once the Import task is placed on the macro canvas, the Configuration tab in the right-hand side panel will be opened. Here users can enter the name for the task, select the data connection that is the source for the import (the Historical Shipments CSV connection), and the data connection that is the destination of the import (a new table named “rawshipments” in the Project Sandbox):

If not yet connected automatically in the previous step, connect the Import Raw Shipments task to the Start task by clicking on the connection point in the middle of the right edge of the Start task, holding the mouse down and dragging the connection line to the connection point in the middle of the left edge of the Import Raw Shipments task. Next, we can test the macro that has been set up so far by running it: either click on the green Run button in the toolbar at the top of DataStar or click on the Run button in the Logs tab at the bottom of the macro canvas:

You can follow the progress of the Macro run in the Logs tab and once finished examine the results on the Data Connections tab. Expand the Project Sandbox data connection to open the rawshipments table by clicking on it. A preview of the table of up to 10,000 records will be displayed in the central part of DataStar:

In this quick start guide we will show how Leapfrog AI can be used in DataStar to generate tasks from natural language prompts, no coding necessary!
This quick start guide builds upon the previous one where a CSV file was imported into the Project Sandbox, please follow the steps in there first if you want to follow along with the steps in this quick start. The starting point for this quick start is therefore a project named Import Historical Shipments that has a Historical Shipments data connection of type = CSV, and a table in the Project Sandbox named rawshipments, which contains 42,656 records.
The Shipments.csv file that was imported into the rawshipments table has following data structure (showing 5 of the 42,656 records):

Our goal in this quick start is to create a task using Leapfrog that will use this data (from the rawshipments table in the Project Sandbox) to create a list of unique customers, where the destination stores function as the customers. Ultimately, this list of customers will be used to populate the Customers input table of a Cosmic Frog model. A few things to consider when formulating the prompt are:
Within the Import Historical Shipments DataStar project, click on the Import Shipments macro to open it in the macro canvas, you should see the Start and Import Raw Shipments tasks on the canvas. Then open Leapfrog by clicking on the Ask Leapfrog AI button to the right in the toolbar at the top of DataStar. This will open the Leapfrog tab where a welcome message will be shown. Next, we can write our prompt in the “Write a message…” textbox.

Keeping in mind the 5 items mentioned above, the prompt we use is the following: “Use the @rawshipments table to create unique customers (use the @rawshipments.destination_store column); average the latitudes and longitudes. Only use records with the @rawshipments.ship_date between July 1 2024 and June 30 2025. Match to the anura schema of the Customers table”. Please note that:
After clicking on the send icon to submit the prompt, Leapfrog will take a few seconds to consider the prompt and formulate a response. The response will look similar to the following screenshot, where we see from top to bottom:

For copy-pasting purposes, the resulting SQL Script is repeated here:
DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT
destination_store AS customername,
AVG(destination_latitude) AS latitude,
AVG(destination_longitude) AS longitude
FROM rawshipments
WHERE
TO_DATE(ship_date, 'DD/MM/YYYY') >= '2024-07-01'::DATE
AND TO_DATE(ship_date, 'DD/MM/YYYY') <= '2025-06-30'::DATE
GROUP BY destination_store;
Those who are familiar with SQL will be able to tell that this will indeed achieve our goal. Since that is the case, we can click on the Add to Macro button at the bottom of Leapfrog’s response to add this as a Run SQL task to our Import Shipments macro. When hovering over this button, you will see that Leapfrog suggests where to put it on the macro canvas and that it will connect it to the Import Raw Shipments task, which is what we want. Clicking on the Add to Macro button then adds it.

We can test our macro so far, by clicking on the green Run button at the right top of DataStar. Please note that:
Once the macro is done running, we can check the results. Go to the Data Connections tab, expand the Project Sandbox connection and click on the customers table to open it in the central part of DataStar:

We see that the customers table resulting from running the Leapfrog-created Run SQL task contains 1,333 records. Also notice that its schema matches that of the Customers table of Cosmic Frog models, which includes columns named customername, latitude, and longitude.
Writing prompts for Leapfrog that will create successful responses (e.g. the SQL Script generated will achieve what the prompt-writer intended) may take a bit of practice. This Mastering Leapfrog for SQL Use Cases: How to write Prompts that get Results post on the Frogger Pond community portal has some great advice which applies to Leapfrog in DataStar too. It is highly recommended to give it a read; the main points of advice follow here too:
As an example, let us look at variations of the prompt we used in this quick start guide, to gauge the level of granularity needed for a successful response. In this table, the prompts are listed from least to most granular:
Note that in the above prompts, we are quite precise about table and column names and no typos are made by the prompt writer. However, Leapfrog can generally manage well with typos and can often also pick up table and column names when they are not explicitly used in the prompt. So, while being more explicit generally results in higher accuracy, it is not necessary to always be extremely explicit; we simply recommend being as explicit as you can.
In addition, these example prompts do not use the @ character to specify tables and columns to use, but they could to facilitate prompt writing further.
In this quick start guide we will walk through the steps of exporting data from a table in the Project Sandbox to a table in a Cosmic Frog model.
This quick start guide builds upon a previous one where unique customers were created from historical shipments using a Leapfrog-generated Run SQL task. Please follow the steps in that quick start guide first if you want to follow along with the steps in this one. The starting point for this quick start is therefore a project named Import Historical Shipments, which contains a macro called Import Shipments. This macro has an Import task and a Run SQL task. The project has a Historical Shipments data connection of type = CSV, and the Project Sandbox contains 2 tables named rawshipments (42,656 records) and customers (1,333 records).
The steps we will walk through in this quick start guide are:
First, we will create a new Cosmic Frog model which does not have any data in it. We want to use this model to receive the data we export from the Project Sandbox.
As shown with the numbered steps in the screenshot below: while on the start page of Cosmic Frog, click on the Create Model button at the top of the screen. In the Create Frog Model form that comes up, type the model name, optionally add a description, and select the Empty Model option. Click on the Create Model button to complete the creation of the new model:

Next, we want to create a connection to the just created empty Cosmic Frog model in DataStar. To do so: open your DataStar application, then click on the Create Data Connection button at the top of the screen. In the Create Data Connection form that comes up, type the name of the connection (we are using the same name as the model, i.e. “Empty CF Model for DataStar Export”), optionally add a description, select Cosmic Frog Models in the Connection Type drop-down list, click on the name of the newly created empty model in the list of models, and click on Add Connection. The new data connection will now be shown in the list of connections on the Data Connections tab (shown in list format here):

Now, go to the Projects tab, and click on the “Import Historical Shipments” project to open it. We will first have a look at the Project Sandbox and the empty Cosmic Frog model connections, so click on the Data Connections tab:

The next step is to add and configure an Export Task to the Import Shipments macro. Click on the Macros tab in the panel on the left-hand side, and then on the Import Shipments macro to open it. Click on the Export task in the Tasks panel on the right-hand side and drag it onto the Macro Canvas. If you drag it close to the Run SQL task, it will automatically connect to it once you drop the Export task:

The Configuration panel on the right has now become the active panel:

Click on the AutoMap button, and in the message that comes up, select either Replace Mappings or Add New Mappings. Since we have not mapped anything yet, the result will be the same in this case. After using the AutoMap option, the mapping looks as follows:

We see that each source column is now mapped to a destination column of the same name. This is what we expect, since in the previous quick start guide, we made sure to tell Leapfrog when generating the Run SQL task for creating unique customers to match the schema of the customers table in Cosmic Frog models (“the Anura schema”).
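As an optional check, the columns of the generated customers table can be listed from the Project Sandbox (for example via the SQL Editor or a Run SQL task) with a standard information_schema query; this is an illustration only:
-- Optional check: confirm the generated customers table's columns match the
-- Anura Customers schema (customername, latitude, longitude).
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'customers'
ORDER BY ordinal_position;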
If the Import Shipments macro has been run previously, we can just run the new Export Customers task by itself (hover over the task in the Macro Canvas and click on the play button that comes up); otherwise, we can run the full macro by clicking on the green Run button at the top right. Once completed, click on the Data Connections tab to check the results:

Above, the AutoMap functionality was used to map all 3 source columns to the correct destination columns. Here, we will go into some more detail on manually mapping and additional options users have to quickly sort and filter the list of mappings.

In this quick start guide we will walk through the steps of modifying data in a table in the Project Sandbox using Update tasks. These changes can either be made to all records in a table or a subset based on a filtering condition. Any PostgreSQL function can be used when configuring the update statements and conditions of Update tasks.
This quick start guide builds upon a previous one where unique customers were created from historical shipments using a Leapfrog-generated Run SQL task. Please follow the steps in that quick start guide first if you want to follow along with the steps in this one. The starting point for this quick start is therefore a project named “Import Historical Shipments”, which contains a macro called Import Shipments. This macro has an Import task and a Run SQL task. The project has a Historical Shipments data connection of type = CSV, and the Project Sandbox contains 2 tables named rawshipments (42,656 records) and customers (1,333 records). Note that if you also followed one of the other quick start guides on exporting data to a Cosmic Frog model (see here), your project will also contain an Export task, and a Cosmic Frog data connection; you can still follow along with this quick start guide too.
The steps we will walk through in this quick start guide are:
We have a look at the customers table which was created from the historical shipment data in the previous 2 quick start guides, see the screenshot below. Sorting on the customername column, we see that the customer names are ordered alphabetically. Because the customername column is of type text (each value starts with the string “CZ”), the names are not ordered by the number part that follows the “CZ” prefix.

If we want sorting the customer names alphabetically to result in the same order as sorting by the number part of the customer name, we need to make sure each customer name has the same number of digits. We will use Update tasks to change the format of the number part of the customer names so that they all have 4 digits, by adding leading 0’s to those that have fewer than 4 digits. While we are at it, we will also replace the “CZ” prefix with “Cust_” to make the data consistent with other data sources that contain customer names. We will initially break the updates to the customername column up into 3 steps using 3 Update tasks. At the end, we will see how they can be combined into a single Update task. The 3 steps are:
Let us add the first Update task to our Import Shipments macro:

After dropping the Update task onto the macro canvas, its configuration tab will be opened automatically on the right-hand side:

If you have not already, click on the plus button to add your first update statement:

Next, we will write the expression, for which we can use the Expression Builder area just below the update statements table. What we type there will also be added to the Expression column of the selected Update Statement. These expressions can use any PostgreSQL function, including those that are not pre-populated in the helper lists. Please see the PostgreSQL documentation for all available functions.

When clicking in the Expression Builder, an equal sign is already there, and a list of items comes up. At the top are the columns present in the target table, and below those is a list of string functions we can select from. The functions shown here are string functions since we are working on a text column; when working on a column of a different data type, the functions relevant to that data type will be shown instead. We will select the last option shown in the screenshot, the substring function, since we first want to remove the “CZ” from the start of the customer names:

The substring function needs at least 2 arguments, which are specified within the parentheses. The first argument needs to be the customername column in our case, since that is the column containing the string values we want to manipulate. After typing a “c”, the customername column and 2 functions starting with “c” are suggested in the pop-up list. We choose the customername column. The second argument specifies the position from which the substring starts. Since we want to remove the “CZ”, we specify 3 as the start position, leaving off characters 1 and 2. The third argument is optional; it specifies how many characters to extract. We do not specify it, meaning we keep all characters from character number 3 onwards:

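For reference, the full expression in the Expression Builder at this point would read along these lines (a sketch using the customername column from this guide):
= substring(customername, 3)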
We can run this task now without specifying a Condition (see section further below) in which case the expression will be applied to all records in the customers table. After running the task, we open the customers table to see the result:

We see that our intended change was made. The “CZ” is removed from the customer names. Sorted alphabetically, they still are not in increasing order of the number part of their name. Next, we use the lpad (left pad) function to add leading zeroes so all customer names consist of 4 digits. This function has 3 arguments: the string to apply the left padding to (the customername column), the number of characters the final string needs to have (4), and the padding character (‘0’).

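For reference, the lpad expression described above would read along these lines (again a sketch using the customername column):
= lpad(customername, 4, '0')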
After running this task, the customername column values are as follows:

Now with the leading zeroes and all customer names being 4 characters long, sorting alphabetically results in the same order as sorting by the number part of the customer name.
Finally, we want to add the prefix “Cust_”. We use the concat (concatenation) function for this. At first, we type Cust_ with double quotes around it, but the squiggly red line below the expression in the Expression Builder indicates this is not the right syntax. Hovering over the expression explains the problem:

The correct syntax for using strings in these functions is to use single quotes:

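For reference, the corrected expression with single quotes would read along these lines (a sketch, assuming the customername column as elsewhere in this guide):
= concat('Cust_', customername)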
Instead of concat, we can also use “= 'Cust_' || customername” as the expression. In PostgreSQL, the double pipe symbol is the string concatenation operator.
Running this third update task results in the following customer names in the customers table:

Our goal of how we wanted to update the customername column has been achieved. Our macro now looks as follows with the 3 Update tasks added:

The 3 tasks described above can be combined into 1 Update task by nesting the expressions as follows:

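As a sketch of such nesting, mirroring the 3 steps above and assuming the same customername column, the combined expression could read:
= concat('Cust_', lpad(substring(customername, 3), 4, '0'))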
Running this task instead of the 3 above will result in the same changes to the customername column in the customers table.
Please note that in the above we only specified one update statement in each Update task. You can add more than one update statement per update task, in which case:
As mentioned above, the list of suggested functions is different depending on the data type of the column being updated. This screenshot shows part of the suggested functions for a number column:

At the bottom of the Expression Builder are multiple helper tabs to facilitate quickly building your desired expressions. The first one is the Function Helper, which lists the available functions by category: string, numeric, date, aggregate, and conditional. At the top of the list, the user has search, filter, and sort options available to quickly find a function of interest. Hovering over a function in the list brings up details of the function, from top to bottom: a summary of the format and the input and output data types of the function, a description of what the function does, its input parameter(s), what it returns, and an example:

The next helper tab contains the Field Helper. This lists all the columns of the target table, sorted by their data type. Again, to quickly find the desired field, users can search, filter, and sort the list using the options at the top of the list:

The fourth tab is the Operator Helper, which lists several helpful numerical and string operators. This list can be searched too using the Search box at the top of the list:

There is another optional configuration section for Update tasks, the Condition section. In here, users can specify an expression to filter the target table on before applying the update(s) specified in the Update Statements section. This way, the updates are only applied to the subset of records that match the condition.
In this example, we will look at some records of the rawshipments table in the project sandbox of the same project (“Import Historical Shipments”). We have opened this table in a grid and filtered for origin_dc Salt Lake City DC and destination_store CZ103.

What we want to do is update the “units” column and increase the values by 50% for the Table product. The Update Statements section shows that we set the units field to its current value multiplied by 1.5, which will achieve the 50% increase:

However, if we run the Update task as is, all values in the units field will be increased by 50%, for both the Table and the Chair product. To make sure we only apply this increase to the Table product, we configure the Condition section as follows:

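As a sketch of this configuration, the update statement and condition could look as follows; the productname column is an assumption here, since the exact name of the product column in the rawshipments table is not shown above:
Update statement expression: = units * 1.5
Condition: productname = 'Table'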
The condition builder has the same function, field, and operator helper tabs at the bottom as the expression builder in the update statements section to enable users to quickly build their conditions. Building conditions works in the same way as building expressions.
Running the task and checking the updated rawshipments table for the same subset of records as we saw above, we can check that it worked as intended. The values in the units column for the Table records are indeed 1.5 times their original value, while the Chair units are unchanged.

It is important to note that opening tables in DataStar currently shows a preview of 10,000 records. When filtering a table by clicking on the filter icons to the right of a column name, only the matching records from those first 10,000 records are shown. While an Update task is applied to all records in a table, due to this limit on the number of records in the preview you may not always be able to see (all) results of your Update task in the grid. In addition, an Update task can also change the order of the records in the table, which can lead to a filter showing a different set of records after running an Update task compared to the filtered subset shown before running it. Users can use the SQL Editor application on the Optilogic platform to see the full set of records of any table.
Finally, if you want to apply multiple conditions you can use logical AND and OR statements to combine them in the Expression Builder. You would for example specify the condition as follows if you want to increase the units for the Table product by 50% only for the records where the origin_dc value is either “Dallas DC” or “Detroit DC”:

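A sketch of such a combined condition, again assuming the product column is named productname:
productname = 'Table' AND (origin_dc = 'Dallas DC' OR origin_dc = 'Detroit DC')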
In this quick start guide we will show how users can seamlessly go from using the Resource Library, Cosmic Frog, and DataStar applications on the Optilogic platform to creating visualizations in Power BI. The example covers a cost to serve analysis using a global sourcing model. We will run 2 scenarios in this Cosmic Frog model with the goal of visualizing the total cost difference between the scenarios by customer on a map. We do this by coloring the customers based on the cost difference.
The steps we will walk through are:
We will first copy the model named “Global Sourcing – Cost to Serve” from the Resource Library to our Optilogic account (learn more about the Resource Library in this help center article):

On the Optilogic platform, go to the Resource Library application by clicking on its icon in the list of applications on the left-hand side; note that you may need to scroll down. If you do not see the Resource Library icon here, click on the icon with 3 horizontal dots, which will also show all previously hidden applications.
Now that the model is in the user’s account, it can be opened in the Cosmic Frog application:


We will only have a brief look at some high-level outputs in Cosmic Frog in this quick start guide, but feel free to explore additional outputs. You can learn more about Cosmic Frog through these help center articles. Let us have a quick look at the Optimization Network Summary output table and the map:


Our next step is to import the needed input table and output table of the Global Sourcing – Cost to Serve model into DataStar. Open the DataStar application on the Optilogic platform by clicking on its icon in the applications list on the left-hand side. In DataStar, we first create a new project named “Cost to Serve Analysis” and set up a data connection to the Global Sourcing – Cost to Serve model, which we will call “Global Sourcing C2S CF Model”. See the Creating Projects & Data Connections section in the Getting Started with DataStar help center article on how to create projects and data connections. Then, we want to create a macro which will calculate the increase/decrease in total cost by customer between the 2 scenarios. We build this macro as follows:

The configuration of the first import task, C2S Path Summary, is shown in this screenshot:

The configuration of the other import task, Customers, uses the same Source Data Connection, but instead of the optimizationcosttoservepathsummary table, we choose the customers table as the table to import. Again, the Project Sandbox is the Destination Data Connection, and the new table is simply called customers.
Instead of writing SQL queries ourselves to pivot the data in the cost to serve path summary table into a new table with one row per customer, containing the customer name and the total cost for each scenario, we can use Leapfrog to do it for us. See the Leapfrog section in the Getting Started with DataStar help center article and this quick start guide on using natural language to create DataStar tasks to learn more about using Leapfrog in DataStar effectively. For the Pivot Total Cost by Scenario by Customer task, the 2 Leapfrog prompts that were used to create the task are shown in the following screenshot:

The SQL Script reads:
DROP TABLE IF EXISTS total_cost_by_customer_combined;
CREATE TABLE total_cost_by_customer_combined AS
SELECT
    pathdestination AS customer,
    SUM(CASE WHEN scenarioname = 'Baseline' THEN pathcost ELSE 0 END)
        AS total_cost_baseline,
    SUM(CASE WHEN scenarioname = 'OpenPotentialFacilities' THEN pathcost ELSE 0 END)
        AS total_cost_openpotentialfacilities
FROM c2s_path_summary
WHERE scenarioname IN ('Baseline', 'OpenPotentialFacilities')
GROUP BY pathdestination
ORDER BY pathdestination;
To create the Calculate Cost Savings by Customer task, we gave Leapfrog the following prompt: “Use the total cost by customer table and add a column to calculate cost savings as the baseline cost minus the openpotentalfacilities cost”. The resulting SQL Script reads as follows:
ALTER TABLE total_cost_by_customer_combined
    ADD COLUMN cost_savings DOUBLE PRECISION;
UPDATE total_cost_by_customer_combined
SET cost_savings = total_cost_baseline - total_cost_openpotentialfacilities;
This task is also added to the macro; its name is "Calculate Cost Savings by Customer".
Lastly, we give Leapfrog the following prompt to join the table with cost savings (total_cost_by_customer_combined) and the customers table to add the coordinates from the customers table to the cost savings table: “Join the customers and total_cost_by_customer_combined tables on customer and add the latitude and longitude columns from the customers table to the total_cost_by_customer_combined table. Use an inner join and do not create a new table, add the columns to the existing total_cost_by_customer_combined table”. This is the resulting SQL Script, which was added to the macro as the "Add Coordinates to Cost Savings" task:
ALTER TABLE total_cost_by_customer_combined ADD COLUMN latitude VARCHAR;
ALTER TABLE total_cost_by_customer_combined ADD COLUMN longitude VARCHAR;
UPDATE total_cost_by_customer_combined SET latitude = c.latitude
FROM customers AS c
WHERE total_cost_by_customer_combined.customer = c.customername;
UPDATE total_cost_by_customer_combined SET longitude = c.longitude
FROM customers AS c
WHERE total_cost_by_customer_combined.customer = c.customername;
We can now run the macro, and once it is completed, we take a look at the tables present in the Project Sandbox:

We will use Microsoft Power BI to visualize the change in total cost between the 2 scenarios by customer on a map. To do so, we first need to set up a connection to the DataStar project sandbox from within Power BI. Please follow the steps in the “Connecting to Optilogic with Microsoft Power BI” help center article to create this connection. Here we will just show the step of getting the connection information for the DataStar Project Sandbox, which is a PostgreSQL database underneath (next screenshot), and selecting the table(s) to use in Power BI on the Navigator screen (the screenshot after that):

After selecting the connection within Power BI and providing the credentials again, on the Navigator screen, choose to use just the total_cost_by_customer_combined table as this one has all the information needed for the visualization:

We will set up the map visualization with the total_cost_by_customer_combined table that we just selected for use in Power BI, following these steps:
With the above configuration, the map will look as follows:

Green customers are those where the total cost went down in the OpenPotentialFacilities scenario, i.e. there are savings for this customer. The darker the green, the higher the savings. White customers did not see a lot of difference in their total costs between the 2 scenarios. The one that is hovered over, in Marysville in Washington state, has a small increase of $149.71 in total costs in the OpenPotentialFacilities scenario as compared to the Baseline scenario. Red customers are those where the total cost went up in the OpenPotentialFacilities scenario (i.e. the cost savings are a negative number); the darker the red, the higher the increase in total costs. As expected, the customers with the highest cost savings (darkest green) are those located in Texas and Florida, as they are now being served from DCs closer to them.
To give users an idea of what type of visualization and interactivity is possible within Power BI, we will briefly cover the 2 following screenshots. These are of a different Cosmic Frog model for which a cost to serve analysis was also performed. Two scenarios were run in this model: Baseline DC and Blue Sky DC. In the Baseline scenario, customers are assigned to their current DCs; in the Blue Sky scenario, they can be re-assigned to other DCs. The chart on the top left shows the cost savings by region (= US state) that are identified in the Blue Sky DC scenario. The other visualizations on the dashboard are all maps: the top right map shows the customers, colored based on which DC serves them in the Baseline scenario, and the bottom 2 maps show the DCs used in the Baseline scenario (left) and the DCs used in the Blue Sky DC scenario (right).

To drill into the differences between the 2 scenarios, users can expand the regions in the top left chart and select 1 or multiple individual customers. This is an interactive chart, and the 3 maps are then automatically filtered for the selected location(s). In the below screenshot, the user has expanded the NC region and then selected customer CZ_593_NC in the top left chart. In this chart, we see that the cost savings for this customer in the Blue Sky DC scenario as compared to the Baseline scenario amount to $309k. From the Customers map (top right) and Baseline DC map (bottom left) we see that this customer was served from the Chicago DC in the Baseline. We can tell from the Blue Sky DC map (bottom right) that this customer is re-assigned to be served from the Philadelphia DC in the Blue Sky DC scenario.

Optilogic has developed Python libraries to facilitate scripting for 2 of its flagship applications: Cosmic Frog, the most powerful supply chain design tool on the market, and DataStar, its just released AI-powered data product where users can create flexible, accessible and repeatable workflows with zero learning curve.
Instead of going into the applications themselves to build and run supply chain models and data workflows, these libraries enable users to programmatically access their functionality and underlying data. Example use cases for such scripts are:
In this documentation we cover the basics of getting yourself set up so you can take advantage of these Python scripting libraries, both on a local computer and on the Optilogic platform leveraging the Lightning Editor application. More specific details for the cosmicfrog and datastar libraries, including examples and end-to-end scripts, are detailed in the following Help Center articles and library specifications:
Working locally with Python scripts has the advantage that you can make use of code completion features, which may include text auto-completion, showing which arguments functions need, catching incorrect syntax/names, etc. One example setup that achieves this is installing Python, Visual Studio Code, and an IntelliSense extension package for Python for Visual Studio Code locally:
Once you are set up locally and start to work with Python scripts in Visual Studio Code, you will need to install the Python libraries you want to use to have access to their functionality. You do this by typing the following in a terminal in Visual Studio Code (if no terminal is open yet: click on the View menu at the top and select Terminal, or use the keyboard shortcut Ctrl + `):

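For reference, the install commands (using the package names referenced later in this article) would be:
pip install cosmicfrog
pip install ol-datastar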
When installing these libraries, multiple external libraries (dependencies) are installed too. These are required to run the packages successfully and/or to make working with them easier. They include the optilogic, pandas, and SQLAlchemy packages (among others) for both libraries. You can find out which packages are installed with the cosmicfrog / ol-datastar libraries by typing “pip show cosmicfrog” or “pip show ol-datastar” in a terminal.
To use other Python libraries in addition, you will usually need to install them using “pip install” too before you can leverage them.
If you want to access certain items on the Optilogic platform (like Cosmic Frog models, DataStar project sandboxes) while working locally, you will need to whitelist your IP address on the platform, so the connections are not blocked by a firewall. You can do this yourself on the Optilogic platform:

Please note that for working with DataStar, the whitelisting of your IP address is only necessary if you want to access the Project Sandbox of projects directly through scripts. You do not need to whitelist your IP address to leverage other functions while scripting, like creating projects, adding macros and their tasks, and running macros.
App Keys are used to authenticate the user from the local environment on the Optilogic platform. To create an App Key, see this Help Center Article on Generating App and API Keys. Copy the generated App Key and paste it into an empty Notepad window. Save this file as app.key and place it in the same folder as your local Python script.
It is important to emphasize that App Keys and app.key files should not be shared with others, e.g. remove them from folders / zip-files before sharing. Individual users need to authenticate with their own App Key.
The next set of screenshots shows an example Python script named testing123.py in our local setup. It uses the cosmicfrog library; using the ol-datastar library works similarly. The first screenshot shows a list of functions available from the cosmicfrog Python library:

When you continue typing after you have typed “model.”, the code completion feature will show a list of functions you may be looking for; in the next screenshot, these are the ones that start with or contain a “g”, as only a “g” has been typed so far. This list updates the more you type. You can select from the list with your cursor or the arrow up/down keys and hit the Tab key to select and auto-complete:

When you have completed typing the function name and next type a parenthesis ‘(‘ to start entering arguments, a pop-up will come up which contains information about the function and its arguments:

As you type the arguments for the function, the argument that you are on and the expected format (e.g. bool for a Boolean, str for string, etc.) will be in blue font and a description of this specific argument appears above the function description (e.g. above box 1 in the above screenshot). In the screenshot above we are on the first argument input_only which requires a Boolean as input and will be set to False by default if the argument is not specified. In the screenshot below we are on the fourth argument (original_names) which is now in blue font; its default is also False, and the argument description above the function description has changed now to reflect the fourth argument:

Once you are ready to run a script, you can click on the play button at the top right of the screen:

As mentioned above, you can also use the Lightning Editor application on the Optilogic platform to create and run Python scripts. Lightning Editor is an Integrated Development Environment (IDE) which has some code completion features, but these are not as extensive and complete as those in Visual Studio Code when used with an IntelliSense extension package.
When working on the Optilogic platform, you are already authenticated as a user, and you do not need to generate / provide an App Key or app.key file nor whitelist your IP address.
When using the datastar library in scripts, users need to place a requirements.txt file in the same folder on the Optilogic platform as the script. This file should only contain the text “ol-datastar” (without the quotes). No requirements.txt file is required when using the cosmicfrog library.
The following simple test.py Python script on Lightning Editor will print the first Hopper output table name and its column names:



DataStar users can take advantage of the datastar Python library, which gives users access to DataStar projects, macros, tasks, and connections through Python scripts. This way users can build, access, and run their DataStar workflows programmatically. The library can be used in a user’s own Python environment (local or on the Optilogic platform), and it can also be used in Run Python tasks in a DataStar macro.
In this documentation we will cover how to use the library through multiple examples. At the end, we will step through an end-to-end script that creates a new project, adds a macro to the project, and creates multiple tasks that are added to the macro. The script then runs the macro while giving regular updates on its progress.
Before diving into the details of this article, it is recommended to read this “Setup: Python Scripts for Cosmic Frog and DataStar” article first; it explains what users need to do in terms of setup before they can run Python scripts using the datastar library. To learn more about the DataStar application itself, please see these articles on Optilogic’s Help Center.
Succinct documentation in PDF format of all datastar library functionality can be downloaded here (please note that the long character string at the beginning of the filename is expected). This includes a list of all available properties and methods for the Project, Macro, Task, and Connection classes at the end of the document.
All Python code that is shown in the screenshots throughout this documentation is available in the Appendix, so that you can copy-paste from there if you want to run the exact same code in your own Python environment and/or use these as jumping off points for your own scripts.
If you have reviewed the “Setup: Python Scripts for Cosmic Frog and DataStar” article and are set up with your local or online Python environment, we are ready to dive in! First, we will see how we can interrogate existing projects and macros using Python and the datastar library. We want to find out which DataStar projects are already present in the user’s Optilogic account.


Once the parentheses are typed, hover text comes up with information about this function. It tells us that the outcome of this method will be a list of strings, and the description of the method reads “Retrieve all project names visible to the authenticated user”. Most methods will have similar hover text describing the method, the arguments it takes and their default values, and the output format.
Now that we have a variable that contains the list of DataStar projects in the user account, we want to view the value of this variable:

Next, we want to dig one level deeper and for the “Import Historical Shipments” project find out what macros it contains:

Finally, we will retrieve the tasks this “Import Shipments” macro contains in a similar fashion:

In addition, we can have a quick look in the DataStar application to see that the information we are getting from the small scripts above matches what we have in our account in terms of projects (first screenshot below), and the “Import Shipments” macro plus its tasks in the “Import Historical Shipments” project (second screenshot below):


Besides getting information about projects and macros, other useful methods for projects and macros include:
Note that when creating new objects (projects, macros, tasks or connections) these are automatically saved. If existing objects are modified, their changes need to be committed by using the save method.
Macros can be copied, either within the same project or into a different project. Tasks can also be copied, either within the same macro, between macros in the same project, or between macros of different projects. If a task is copied within the same macro, its name will automatically be suffixed with (Copy).
As an example, we will consider a macro called “Cost Data” in a project named “Data Cleansing and Aggregation NA Model”, which is configured as follows:

The North America team shows this macro to their EMEA counterparts who realize that they could use part of this for their purposes, as their transportation cost data has the same format as that of the NA team. Instead of manually creating a new macro with new tasks that duplicate the 3 transportation cost related ones, they decide to use a script where first the whole macro is copied to a new project, and then the 4 tasks which are not relevant for the EMEA team are deleted:

After running the script, we see in DataStar that there is indeed a new project named “Data Cleansing and Aggregation EMEA” which has a “Cost Data EMEA” macro that contains the 3 transportation cost related tasks that we wanted to keep:

Note that another way we could have achieved this would have been to copy the 3 tasks from the macro in the NA project to the new macro in the EMEA project. The next example shows this for one task. Say that after the Cost Data EMEA macro was created, the team finds they also have a use for the “Import General Ledger” task that was deleted as it was not on the list of “tasks to keep”. In an extension of the previous script or a new one, we can leverage the add_task method of the Macro class to copy the “Import General Ledger” task from the NA project to the EMEA one:

After running the script, we see that the “Import General Ledger” task is now part of the “Cost Data EMEA” macro and is connected to the Start task:

Several additional helpful features on chaining tasks together in a macro are:
DataStar connections allow users to connect to different types of data sources, including CSV-files, Excel files, Cosmic Frog models, and Postgres databases. These data sources need to be present on the Optilogic platform (i.e. visible in the Explorer application). They can then be used as sources / destinations / targets for tasks within DataStar.
We can use scripts to create data connections:

After running this script, we see the connections have been created. In the following screenshot, the Explorer is on the left, and it shows the Cosmic Frog model “Global Supply Chain Strategy.frog” and the Shipments.csv file. The connections using these are listed in the Data Connections tab of DataStar. Since we did not specify any description, an auto-generated description “Created by the Optilogic Datastar library” was added to each of these 2 connections:

In addition to the connections shown above, data connections to Excel files (.xls and .xlsx) and PostgreSQL databases stored on the Optilogic platform can be created too. Use the ExcelConnection and OptiConnection classes to set up these types of connections.
Each DataStar project has its own internal data connection, the project sandbox. This is where users perform most of the data transformations after importing data into the sandbox. Using scripts, we can access and modify data in this sandbox directly instead of using tasks in macros to do so. Note that if you have a repeatable data workflow in DataStar which is used periodically to refresh a Cosmic Frog model, where you update your data sources and re-run your macros, you need to be mindful of making one-off changes to the project sandbox through a script. When you change data in the sandbox through a script, macros and tasks are not updated to reflect these modifications, so the next time you run the data workflow, the results may differ if the change the script made is not applied again. If you want to include such changes in your workflow, you can add a Run Python task to your macro within DataStar.
Our “Import Historical Shipments” project has a table named customers in its project sandbox:

To make the customers sort in numerical order of their customer number, our goal in the next script is to update the number part of the customer names with left padded 0’s so all numbers consist of 4 digits. And while we are at it, we are also going to replace the “CZ” prefix with a “Cust_” prefix.
First, we will show how to access data in the project sandbox:

Next, we will use functionality of the pandas Python library (installed as a dependency when installing the datastar library) to transform the customer names to our desired Cust_xxxx format:

As a last step, we can now write the updated customer names back into the customers table in the sandbox. Or, if we want to preserve the data in the sandbox, we can also write to a new table as is done in the next screenshot:

We use the write_table method to write the dataframe with the updated customer names into a new table called “new_customers” in the project sandbox. After running the script, opening this new table in DataStar shows us that the updates worked:

Finally, we will put everything we have covered above together in one script which will:
We will look at this script through the next set of screenshots. For those who would like to run this script themselves, and possibly use it as a starting point to modify into their own script:


Next, we will create 7 tasks to add to the “Populate 3 CF Model Tables” macro, starting with an Import task:

Similar to the “create_dc_task” Run SQL task, 2 more Run SQL tasks are created to create unique customers and aggregated customer demand from the raw_shipments table:

Now that we have generated the distribution_centers, customers, and customer_demand tables in the project sandbox using the 3 Run SQL tasks, we want to export these tables into their corresponding Cosmic Frog tables (facilities, customers, and customerdemand) in the empty Cosmic Frog model:

The following 2 Export tasks are created in a very similar way:


This completes the build of the macro and its tasks.
If we run it like this, the tasks will be chained in the correct way, but they will be displayed on top of each other on the Macro Canvas in DataStar. To arrange them nicely and prevent having to reposition them manually in the DataStar UI, we can use the “x” and “y” properties of tasks. Note that since we are now changing existing objects, we need to use the save method to commit the changes:

In the green outlined box, we see that the x-coordinate on the Macro Canvas for the import_shipments_task is set to 250 (line 147) and its y-coordinate to 150 (line 148). In line 149 we use the save method to persist these values.
Now we can kick off the macro run and monitor its progress:

While the macro is running, the messages written to the terminal by the wait_for_done method will look similar to the following:

We see 4 messages where the status was “processing” and then a final fifth one stating the macro run has completed. Other statuses one might see are “pending” when the macro has not yet started, and “errored” in case the macro could not finish successfully.
Opening the DataStar application, we can check on the DataStar start page that the project and CSV connection were created. They are indeed there, and we can open the “Scripting with DataStar” project to check the “Populate 3 CF Model Tables” macro and the results of its run:

The macro contains the 7 tasks we expect, and checking their configurations shows they are set up the way we intended.
Next, we have a look at the Data Connections tab to see the results of running the macro:

Here follows the code of each of the above examples. You can copy and paste this into your own scripts and modify them to your needs. Note that whenever names and paths are used, you may need to update these to match your own environment.
Get list of DataStar projects in user's Optilogic account and print list to terminal:
from datastar import *
project_list = Project.get_projects()
print(project_list)
Connect to the project named "Import Historical Shipments" and get the list of macros within this project. Print this list to the terminal:
from datastar import *
project = Project.connect_to("Import Historical Shipments")
macro_list = project.get_macros()
print(macro_list)
In the same "Import Historical Shipments" project, get the macro named "Import Shipments", and get the list of tasks within this macro. Print the list with task names to the terminal:
from datastar import *
project = Project.connect_to("Import Historical Shipments")
macro = project.get_macro("Import Shipments")
task_list = macro.get_tasks()
print(task_list)
Copy 3 of the 7 tasks in the "Cost Data" macro in the "Data Cleansing and Aggregation NA Model" project to a new macro "Cost Data EMEA" in a new project "Data Cleansing and Aggregation EMEA". Do this by first copying the whole macro and then removing the tasks that are not required in this new macro:
from datastar import *
# connect to project and get macro to be copied into new project
project = Project.connect_to("Data Cleansing and Aggregation NA Model")
macro = project.get_macro("Cost Data")
# create new project and clone macro into it
new_project = Project.create("Data Cleansing and Aggregation EMEA")
new_macro = macro.clone(new_project, name="Cost Data EMEA",
    description="Cloned from NA project; keep 3 transportation tasks")
# list the transportation cost related tasks to be kept and get a list
# of tasks present in the copied macro in the new project, so that we
# can determine which tasks to delete
tasks_to_keep = ["Start",
    "Import Transportation Cost Data",
    "Cleanse TP Costs",
    "Aggregate TP Costs by Month"]
tasks_present = new_macro.get_tasks()
# go through tasks present in the new macro and
# delete if the task name is not in the "to keep" list
for task in tasks_present:
    if task not in tasks_to_keep:
        new_macro.delete_task(task)
Copy specific task "Import General Ledger" from the "Cost Data" macro in the "Data Cleansing and Aggregation NA Model" project to the "Cost Data EMEA" macro in the "Data Cleansing and Aggregation EMEA" project. Chain this copied task to the Start task:
from datastar import *
project_1 = Project.connect_to("Data Cleansing and Aggregation NA Model")
macro_1 = project_1.get_macro("Cost Data")
project_2 = Project.connect_to("Data Cleansing and Aggregation EMEA")
macro_2 = project_2.get_macro("Cost Data EMEA")
task_to_copy = macro_1.get_task("Import General Ledger")
start_task = macro_2.get_task("Start")
copied_task = macro_2.add_task(task_to_copy,
    auto_join=False,
    previous_task=start_task)
Creating a CSV file connection and a Cosmic Frog Model connection:
from datastar import *
shipments = DelimitedConnection(
    name="Shipment Data",
    path="/My Files/DataStar/Shipments.csv",
    delimiter=","
)
cf_global_sc_strategy = FrogModelConnection(
    name="Global SC Strategy CF Model",
    model_name="Global Supply Chain Strategy"
)
Connect directly to a project's sandbox, read data into a pandas dataframe, transform it, and write the new dataframe into a new table "new_customers":
from datastar import *
# connect to project and get its sandbox
project = Project.connect_to("Import Historical Shipments")
sandbox = project.get_sandbox()
# use pandas to read the "customers" table into a dataframe
df_customers = sandbox.read_table("customers")
# copy the dataframe into a new dataframe
df_new_customers = df_customers.copy()
# use pandas to change the customername column values format
# from CZ1, CZ20, etc to Cust_0001, Cust_0020, etc
df_new_customers['customername'] = df_new_customers['customername'].map(lambda x: x.lstrip('CZ'))
df_new_customers['customername'] = df_new_customers['customername'].str.zfill(4)
df_new_customers['customername'] = 'Cust_' + df_new_customers['customername']
# write the updated customers table with the new customername
# values to a new table "new_customers"
sandbox.write_table(df_new_customers, "new_customers")
End-to-end script - create a new project and add a new macro to it; add 7 tasks to the macro to import shipments data; create unique customers, unique distribution centers, and demand aggregated by customer and product from it. Then export these 3 tables to a Cosmic Frog model:
from datastar import *
#------------------------------------
# Create new project and add macro
#------------------------------------
project = Project.create("Scripting with DataStar",
description= "Show how to use a Python script to "
"create a DataStar project, add connections, create "
"a macro and its tasks, and run the macro.")
macro = project.add_macro(name="Populate 3 CF Model Tables")
#--------------------
# Get & set up connections
#--------------------
sandbox = project.get_sandbox()
cf_model = Connection.get_connection("Cosmic Frog Model")
shipments = DelimitedConnection(
name="May2024-Sept2025 Shipments",
path="/My Files/DataStar/shipments.csv",
delimiter=",")
#-----------------------
# Create tasks
#-----------------------
# Import Task to import the raw shipments from the shipments CSV connection
# into a table named raw_shipments in the project sandbox
import_shipments_task = macro.add_import_task(
name="Import historical shipments",
source_connection=shipments,
destination_connection=sandbox,
destination_table="raw_shipments")
# Add 3 run SQL tasks to create unique DCs, unique Customers, and Customer
# Demand (aggregated by customer and product from July 2024-June 2025)
# from the raw shipments data.
create_dc_task = macro.add_run_sql_task(
name="Create DCs",
connection=sandbox,
query="""
CREATE TABLE IF NOT EXISTS distribution_centers AS
SELECT DISTINCT origin_dc AS dc_name,
AVG(origin_latitude) AS dc_latitude,
AVG(origin_longitude) AS dc_longitude
FROM raw_shipments
GROUP BY dc_name;""")
create_cz_task = macro.add_run_sql_task(
name="Create customers",
connection=sandbox,
query="""
CREATE TABLE IF NOT EXISTS customers AS
SELECT DISTINCT destination_store AS cust_name,
AVG(destination_latitude) AS cust_latitude,
AVG(destination_longitude) AS cust_longitude
FROM raw_shipments
GROUP BY cust_name;""",
auto_join=False,
previous_task=import_shipments_task)
create_demand_task = macro.add_run_sql_task(
name="Create customer demand",
connection=sandbox,
query="""
CREATE TABLE IF NOT EXISTS customer_demand AS
SELECT destination_store AS cust_name,
productname,
SUM(units) AS demand_quantity
FROM raw_shipments
WHERE TO_DATE(ship_date, 'DD/MM/YYYY') BETWEEN
'2024-07-01' AND '2025-06-30'
GROUP BY cust_name, productname;""",
auto_join=False,
previous_task=import_shipments_task)
# Add 3 export tasks to populate the Facilities, Customers,
# and CustomerDemand tables in empty CF model connection
export_dc_task = macro.add_export_task(
name="Export distribution centers",
source_connection=sandbox,
source_table="distribution_centers",
destination_connection=cf_model,
destination_table="facilities",
destination_table_type="existing",
destination_table_action="replace",
mappings=[{"sourceType":"text","targetType":"text",
"sourceColumn":"dc_name","targetColumn":"facilityname"},
{"sourceType":"number","targetType":"text",
"sourceColumn":"dc_latitude","targetColumn":"latitude"},
{"sourceType":"number","targetType":"text",
"sourceColumn":"dc_longitude","targetColumn":"longitude"}],
auto_join=False,
previous_task=create_dc_task)
export_cz_task = macro.add_export_task(
name="Export customers",
source_connection=sandbox,
source_table="customers",
destination_connection=cf_model,
destination_table="customers",
destination_table_type="existing",
destination_table_action="replace",
mappings=[{"sourceType":"text","targetType":"text",
"sourceColumn":"cust_name","targetColumn":"customername"},
{"sourceType":"number","targetType":"text",
"sourceColumn":"cust_latitude","targetColumn":"latitude"},
{"sourceType":"number","targetType":"text",
"sourceColumn":"cust_longitude","targetColumn":"longitude"}],
auto_join=False,
previous_task=create_cz_task)
export_demand_task = macro.add_export_task(
name="Export customer demand",
source_connection=sandbox,
source_table="customer_demand",
destination_connection=cf_model,
destination_table="customerdemand",
destination_table_type="existing",
destination_table_action="replace",
mappings=[{"sourceType":"text","targetType":"text",
"sourceColumn":"cust_name","targetColumn":"customername"},
{"sourceType":"text","targetType":"text",
"sourceColumn":"productname","targetColumn":"productname"},
{"sourceType":"number","targetType":"text",
"sourceColumn":"demand_quantity","targetColumn":"quantity"}],
auto_join=False,
previous_task=create_demand_task)
#--------------------------------
# Position tasks on Macro Canvas
#--------------------------------
import_shipments_task.x = 250
import_shipments_task.y = 150
import_shipments_task.save()
create_dc_task.x = 500
create_dc_task.y = 10
create_dc_task.save()
create_cz_task.x = 500
create_cz_task.y = 150
create_cz_task.save()
create_demand_task.x = 500
create_demand_task.y = 290
create_demand_task.save()
export_dc_task.x = 750
export_dc_task.y = 10
export_dc_task.save()
export_cz_task.x = 750
export_cz_task.y = 150
export_cz_task.save()
export_demand_task.x = 750
export_demand_task.y = 290
export_demand_task.save()
#-----------------------------------------------------
# Run the macro and write regular progress updates
#-----------------------------------------------------
macro.run()
macro.wait_for_done(verbose=True)
When demand fluctuates due to, for example, seasonality, it can be beneficial to manage inventory dynamically. This means that when the demand (or forecasted demand) goes up or down, the inventory levels go up or down accordingly. To support this in Cosmic Frog models, inventory policies can be set up in terms of days of supply (DOS): for example, for the (s,S) inventory policy, the Simulation Policy Value 1 UOM and Simulation Policy Value 2 UOM fields can be set to DOS. Say for example that reorder point s and order up to quantity S are set to 5 DOS and 10 DOS, respectively. This means that if the inventory falls to or below the level that is the equivalent of 5 days of supply, a replenishment order is placed for the amount of inventory that brings the level up to the equivalent of 10 days of supply. In this documentation we will cover the DOS-specific inputs on the Inventory Policies table and how the equivalent of 1 day of supply in units is calculated from these, and then walk through a numbers example.
In short, using DOS lets users be flexible with policy parameters; it is a good starting point for estimating/making assumptions about how inventory is managed in practice.
Note that it is recommended you are familiar with the Inventory Policies table in Cosmic Frog already before diving into the details of this help article.
The following screenshot shows the fields that set the simulation inventory policy and its parameters:

For the same inventory policy, the next screenshot shows the DOS-related fields on the Inventory Policies table; note that the UOM fields are omitted in this screenshot:

As mentioned above, when using forecasted demand for the DOS calculations, this forecasted demand needs to be specified in the User Defined Forecasts Data and User Defined Forecasts tables, which we will discuss here. This next screenshot shows the first 15 example records in the User Defined Forecasts Data table:

Next, the User Defined Forecasts table lets a user configure the time-period to which a forecast is aggregated:

Let us now explain how the DOS calculations work for different DOS settings through the examples shown in the next screenshot. Note that for all these examples the DOS Review Period First Time field has been left blank, meaning that the first 1 DOS equivalent calculation occurs at the start of this model (on January 1st) for each of these examples:

Now that we know how to calculate the value of 1 DOS, we can apply this to inventory policies which use DOS as their UOM for the simulation policy value fields. We will do a numbers example with the policy shown in the screenshot above (in the Days of Supply Settings section), where reorder point s is 5 DOS and order up to quantity S is 10 DOS. We assume the same settings as in the last example of the 1 DOS calculations above, explained in bullet #6: forecasted demand is used with a 10 day DOS Window, a 5 day DOS Leadtime, and a 5 day DOS Review Period, so the calculations for the equivalent of 1 DOS are the numbers in the last row shown in that screenshot, which we will use in our example below. In addition, we assume a 2 day Review Period for the inventory policy, meaning inventory levels are checked every other day to see if a replenishment order needs to be placed. DC_1 also has 1,000 units of P1 on hand at the start of the simulation (specified in the Initial Inventory field):

Cosmic Frog’s new breakpoints feature enables users to create maps which relay even more supply chain data in just one glance. Lines and points can now be styled based on field values from the underlying input or output table the lines/points are drawn from.
In this Help Center article, we will cover where to find the breakpoints feature for both point and line layers and how to configure them. A basic knowledge of how to configure maps and their layers in Cosmic Frog is assumed; users unfamiliar with maps in Cosmic Frog are encouraged to first read the “Getting Started with Maps” Help Center article.
First, we will walk through how to apply breakpoints to map layers of type = line, which are often used to show flows between locations. With breakpoints we can style the lines between origins and destinations for example based on how much is flowing in terms of quantity, volume or weight. It is also possible to style the lines on other numeric fields, like costs, distances or time.
Consider the following map showing flows (dark green lines) to customers (light green circles):

Next, we will go to the Layer Style pane on which breakpoints can be turned on and configured:

Once the Breakpoints toggle has been turned on (slide right, the color turns blue), the breakpoint configuration options become visible:

One additional note: the Tab key can be used to navigate through the cells of the Breakpoints table.
The next screenshot shows breakpoints based on the Flow Quantity field (in the Optimization Flow Summary) for which the Max Values have been auto generated:


Users can customize the style of each individual breakpoint:

Please note:
Configuring and applying breakpoints on point layers is very similar to doing so on line layers. We will walk through the steps in the next 4 screenshots in slightly less detail. In this example we will base the size of the customer locations on the map on the total demand served to them:

Next, we again look at the Layer Style pane of the layer:


Lastly, the user would like the color of the customer circles to gradually go from light to dark green, and their size from small to larger, based on the breakpoint the customer falls into:

As always, please feel free to reach out to Optilogic support at support@optilogic.com should you have any questions.
For various reasons, many supply chains need to deal with returns. This can for example be due to packaging materials coming back to be reused at plants or DCs, retail customers returning finished goods that they are not happy with, defective products, etc. Previously, these returns could mostly be modelled within Cosmic Frog NEO (Network Optimization) models only by using some tricks and workarounds. With the latest Cosmic Frog release, returns are now supported natively, so the reuse, repurposing, or recycling of returned products can easily be taken into account, helping companies reduce costs, minimize waste, and improve overall supply chain efficiency.
This documentation will first provide an overview of how returns work in a Cosmic Frog model and then walk through an example model of a retailer which includes modelling the returns of finished goods. The appendix details all the new returns-related fields in several new tables and some of the existing tables.
When modelling returns in Cosmic Frog:
Users need to use 2 new input tables to set up returns:

The Return Ratios table contains the information on how much return-product is returned for a certain amount of product delivered to a certain destination:

The Return Policies table is used to indicate where returned products need to go to and the rules around multiple possible destinations. Optionally, costs can be associated with the returns here and a maximum distance allowed for returns can be entered on this table too.

Note that both these tables have Status and Notes fields (not shown in the screenshots), like most Cosmic Frog input tables have. These are often used for scenario creation where the Status is set to Exclude in the table itself and changed to Include in select scenarios based on text in the Notes field.
All columns on these 2 returns-related input tables are explained in more detail in the appendix.
In addition to populating the Return Policies and Return Ratios tables, users need to be aware that additional model structure needed for the returned products may need to be put in place:
The Optimization Return Summary output table is a new output table that will be generated for NEO runs if returns are included in the modelling:

This table and all its fields are explained in detail in the appendix.
The Optimization Flow Summary output table will contain additional records for models that include returns; they can be identified by filtering the Flow Type field for “Return”:

These 2 records show the return flows and associated transportation costs for the Bag_1 and Bag_2 products from CZ_001, going to DC_Cincinnati, that we saw in the Optimization Return Summary table screenshot above.
In addition to the new Optimization Return Summary output table and new records of Flow Type = Return in the Optimization Flow Summary output table, the following existing output tables now contain additional fields related to returns:
The example Returns model can be copied from the Resource Library to a user’s Optilogic account (see this help center article on how to use the Resource Library). It models the US supply chain of a fashion bag retailer. The model’s locations and flows both to customers and between DCs are shown in this screenshot (returns are not yet included here):

Historically, the retailer had 1 main DC in Cincinnati, Ohio, where all products were received and all 869 customers were fulfilled from. Over time, 4 secondary DCs were added based on Greenfield analysis, 2 bigger ones in Clovis, California, and Jersey City, New Jersey, and 2 smaller ones in West Palm Beach, Florida, and Las Lomas, Texas. These secondary DCs receive product from the Cincinnati DC and serve their own set of customers. The main DC in Cincinnati and the 2 bigger secondary DCs (Clovis, CA, and Jersey City, NJ) can handle returns currently: returns are received there and re-used to fulfill demand. However, until now, these returns had not been taken into account in the modelling. In this model we will explore following scenarios:
Other model features:
Please note that in this model the order of columns in the tables has sometimes been changed to put those containing data together on the left-hand side of the table. All columns are still present in the table but may be in a different position than you are used to. Columns can be reset to their default position by choosing “Reset Columns” from the menu that comes up when clicking on the icon with 3 vertical dots to the right of a column name.
After running the baseline scenario (which does not include returns), we take a look at the Financials: Scenario Cost Comparison chart in the Optimization Scenario Comparison dashboard (in Cosmic Frog’s Analytics module):

We see that the biggest cost currently is the production cost at 394.7M (= procurement of all product into Cincinnati), followed by transportation costs at 125.9M. The total supply chain cost of this scenario is 625.3M.
In this scenario we want to include how returns currently work: Cincinnati, Clovis, and Jersey City customers return their products to their local DCs whereas West Palm Beach and Las Lomas customers return their products to the main DC in Cincinnati. To set this up, we need to add records to the Return Policies, Return Ratios, and Transportation Policies input tables. To not change the Baseline scenario, all new records will be added with Status = Exclude, and the Notes field populated so it can be used to filter on in scenario items that change the Status to Include for subsets of records. Starting with the Return Policies table:

Next, we need to add records to the Transportation Policies table so that there is at least 1 lane available for each site-product-destination combination set up in the return policies table. For this example, we add records to the Transportation Policies table that match the ones added to the Return Policies table exactly, while additionally setting Mode Name = Returns, Unit Cost = 0.04 and Unit Cost UOM = EA-MI (the latter is not shown in the screenshot below), which means the transportation cost on return lanes is 0.04 per unit per mile:

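As a quick illustration of how this EA-MI unit of measure works out, consider the sketch below (plain Python; the returned quantity and distance are hypothetical, only the 0.04 rate comes from the records described above):

```python
# Illustrative only: how Unit Cost = 0.04 with Unit Cost UOM = EA-MI translates into a lane cost.
unit_cost_per_unit_mile = 0.04  # from the Transportation Policies records added for the return lanes

def return_lane_transport_cost(units_returned, distance_miles):
    """Transportation cost of a return lane at 0.04 per unit per mile."""
    return unit_cost_per_unit_mile * units_returned * distance_miles

# Hypothetical example: 500 returned units travelling 250 miles back to a DC.
print(return_lane_transport_cost(500, 250))  # 0.04 * 500 * 250 = 5000.0
```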
Finally, we also need to indicate how much product is returned in the Return Ratios table. Since we want to model different ratios by individual customer and individual product, this table does not use any groups. Groups can however be used in this table too for the Site Name, Product Name, Period Name, and Return Product Name fields.

Please note that adding records to these 3 tables and including them in the scenarios is sufficient to capture returns in this example model. For other models it is possible that additional tables may need to be used, see the Other Input Tables section above.
Now that we have populated the input tables to capture returns, we can set up scenario S1 which will change the Status of the appropriate records in these tables from Exclude to Include:

After running this scenario S1, we first have a look at the map, where we will show the DCs, Customers, and the Return Flows for scenario S1. This has been set up in the map named Supply Chain (S1) in the model from the Resource Library. To set this map up, we first copied the Supply Chain (Baseline) map and renamed it to Supply Chain (S1). We then clicked on the map’s name (Supply Chain (S1)) to open it and, in the Map Filters form showing on the right-hand side of the screen, changed the scenario to “S1 Include Returns” in the Scenario drop-down. To configure the Return Flows, we added a new Map Layer and configured its Condition Builder form as follows (learn more about Maps and how to configure them in this Help Center article):

The resulting map is shown in this next screenshot:

We see that, as expected, the bulk of the returns are going back to the main DC in Cincinnati: from its local customers, but also from the customers served by the 2 smaller DCs in Las Lomas and West Palm Beach. The customers served by the Clovis and Jersey City DCs return their products to their local DCs.
To assess the financial impact of including returns in the model, we again look at the Financials: Scenario Cost Comparison chart in the Optimization Scenario Comparison dashboard, comparing the S1 scenario to the Baseline scenario:

We see that including returns in S1 leads to:
The main driver for the higher overall supply chain costs when including returns is the high transportation cost of returning products, especially those travelling long distances from the Las Lomas and West Palm Beach customers to the Cincinnati DC. This sparks the idea to explore whether it would be more beneficial for the Las Lomas and/or West Palm Beach customers to return their products to their local DC, rather than the Cincinnati DC. This will be modelled in the next three scenarios.
Building upon scenario S1, we will run 2 scenarios (S2 and S3) where it will be examined if it is beneficial cost-wise for West Palm Beach customers to return their products to their local West Palm Beach DC (S2) and for Las Lomas customers to return their products to their local Las Lomas DC (S3) rather than to the Cincinnati DC. In order to be able to handle returns, the fixed operating costs at these DCs are increased by 0.5M to 3.5M:

Scenarios S2 and S3 are run, and first we look at the map to check the return flows for the West Palm Beach and Las Lomas customers, respectively (we copied the map for S1, renamed it, and then changed the scenario by clicking on the map’s name and selecting the S2/S3 scenario from the Scenario drop-down in the Map Filters pane on the right-hand side):


As expected, due to how we set up these scenarios, all returns from these customers now go to their local DC, rather than to DC_Cincinnati, which was the case in scenario S1.
Let us next look at the overall costs for these 2 scenarios and compare them back to the S1 and Baseline scenarios:

Besides some smaller reductions in the inbound and outbound costs in S2 and S3 as compared to S1, the transportation costs are reduced by sizeable amounts: 6.9M (S2 compared to S1) and 9.4M (S3 compared to S1), while the production (= procurement) costs are the same across these 3 scenarios. The reduction in transportation costs outweighs the 0.5M increase in fixed operating costs to be able to handle returns at the West Palm Beach and Las Lomas DCs. Also note that both scenario S2 and S3 have a lower total cost than the Baseline scenario.
Since it is beneficial to have the West Palm Beach and Las Lomas DCs handle returns, scenario S4 where this capability is included for both DCs is set up and run:

The S4 scenario increases the fixed operating costs at both these DCs from 3M to 3.5M (scenario items “Incr Operating Cost S2” and “Incr Operating Cost S3”), sets the Status of all records on the Return Ratios table to Include (the Include Return Ratios scenario item), and sets the Status to Include for records on the Return Policies and Transportation Policies tables where the Notes field contains the text “S4” (the “Include Return Policies S4” and “Include Return TPs S4” items), which are records where customers all ship their returns back to their local DC. We first check on the map if this is working as expected after running the S4 scenario:

We notice that indeed there are no more returns going back to the Cincinnati DC from Las Lomas or West Palm Beach customers.
Finally, we expect the costs of this scenario to be the lowest overall since we should see the combined reductions of scenarios S2 and S3:

Between S1 and S4:
In addition to looking at maps or graphs, users can also use the output tables to understand the overall costs and flows, including those of the returns included in the network.
Often, users will start by looking at the overall cost picture using the Optimization Network Summary output table, which summarizes total costs and quantities at the scenario level:

For each scenario, we are showing the Total Supply Chain Cost and Total Return Quantity fields here. As mentioned, the Baseline did not include any returns, whereas scenarios S1-4 did, which is reflected in the Total Return Quantity values. There are many more fields available on this output table, but in the next screenshot we are just showing the individual cost buckets that are used in this model (all other cost fields are 0):

How these costs increase/decrease between scenarios has been discussed above when looking at the “Financials: Scenario Cost Comparison” chart in the “Optimization Scenario Comparison” dashboard. In summary:
Please note that on this table, there is also a Total Return Cost field. It is 0 in this example model. It would be > 0 if the Unit Cost field on the Return Policies table had been populated, which is a field where any specific cost related to the return can be captured. In our example Returns model, the return costs are entirely captured by the transportation costs and fixed operating costs specified.
The Optimization Return Summary output table is a new output table that has been added to summarize returns at the scenario-returning site-product-return product-period level:

Looking at the first record here, we understand that in the S1 scenario, CZ_001 was served 8,850 units of Bag_1, while 876.15 units of Bag_1 were returned.
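As a quick sanity check, we can back out the return ratio behind this record (assuming the Return Ratio is expressed as returned units per delivered unit):

```python
# First record of the Optimization Return Summary for scenario S1:
served_quantity = 8850      # units of Bag_1 delivered to CZ_001
returned_quantity = 876.15  # units of Bag_1 returned by CZ_001

# Assuming Return Ratio = returned units per delivered unit:
return_ratio = returned_quantity / served_quantity
print(round(return_ratio, 3))  # 0.099, i.e. roughly 9.9% of the delivered units come back
```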
Lastly, we can also see individual return flows in the Optimization Flow Summary table by filtering the Flow Type field for “Return”:

Note that the product name for these flows is of the product that is being returned.
The example Returns model described above assumes that 100% of the returned Bag_1 and Bag_2 products can be reused. Here we will discuss through screenshots how the model can be adjusted to take into account that only 70% of Bag_1 returns and 50% of Bag_2 returns can be reused. To achieve this, we will need to add an additional “return” product for each finished good, set up bills of materials, and add records to the policies tables for the required additional model structure.
The tables that will be updated and for which we will see a screenshot each below are: Products, Groups, Return Policies, Return Ratios, Transportation Policies, Warehousing Policies, Bills of Materials, and Production Policies.
Two products are added here, 1 for each finished good: Bag_1_Return and Bag_2_Return. This way we can distinguish the return product from the sellable finished goods, apply different policies/costs to them, and convert a percentage back into the sellable items. The naming convention of adding “_Return” to the finished good name makes for easy filtering and provides clarity around what the product’s role is in the model. Of course, users can use different naming conventions.
The same unit value as for the finished goods is used for the return products, so that inventory carrying cost calculations are consistent. A unit price (again, same as the finished goods) has been entered too, but this will not actually be used by the model as these “_Return” products are not used to serve customer demand.

To facilitate setting up policies where the return products behave the same (e.g. same lanes, same costs, etc.), we add an “All_Return_Products” group to the Groups table, which consists of the 2 return products:

In the Return Policies table, the Return Product Name column needs to be updated to reflect that the products that are being returned are the “_Return” products. Previously, the Return Product Name was set to the All_Products group for each record, and it is now updated to the All_Return_Products group. Updating a field in all records or a subset of filtered records to the same value can be done using the Bulk Update Column functionality, which can be accessed by clicking on the icon with 3 vertical dots to the right of the column name and then choosing “Bulk Update this Column” in the list of options that comes up.

We keep the ratios of how much product comes back for each unit of Bag_1 / Bag_2 sold the same; however, we need to update the Return Product Name field on all records to reflect that it is the “_Return” product that comes back. Since this table does not use groups (the return ratios are different for different customer-finished good combinations), the best way to update this table is to also use the bulk update column functionality:
Note that only 4 of the 1,738 records in this table are shown in the screenshot below.

Here, the records representing the lane back from the customers to the DC they send returns back to need to be updated so that the products going back are the “_Return” ones. Since the transportation costs of the return products are the same, we can keep using the grouped policies and just bulk update the Product Name column of the records where Mode Name equals Returns: change the values from the All_Products group to the All_Return_Products group.

We want to apply the same inbound and outbound handling costs for the return products as we do for the finished goods, so a record is added for the “All_Return_Products” group at All_DCs in the Warehousing Policies table:

We can use the Bills of Materials (BOM) table to convert the “_Return” products back into the finished goods, applying the desired percentage that will be suitable for reuse. For Bag_1, we want to set up that 70% of the returns can be reused as finished goods; this is done by setting up a BOM as follows (the first 2 records in the screenshot below):
Similarly, we set up the BOM “Reuse_Bag_2” where 1 unit of Bag_2_Return results in 0.5 units of Bag_2 (the 3rd and 4th record in the screenshot):

For the BOMs to be used, they need to be associated with the appropriate location-product combinations through production policies. So, we add 2 records to the Production Policies table, which set that at All_DCs the finished goods can be produced using the 2 BOMs. The Unit Cost set on this table represents the cost of inspecting each returned bag and deciding whether it can be reused.

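To make the effect of these reuse BOMs and production policies concrete, here is a small sketch (Python; the returned quantities are hypothetical, the 70% / 50% yields come from the BOMs described above):

```python
# Reuse BOMs: 1 Bag_1_Return -> 0.7 Bag_1, 1 Bag_2_Return -> 0.5 Bag_2.
reuse_yield = {"Bag_1_Return": 0.7, "Bag_2_Return": 0.5}

# Hypothetical quantities of returned product arriving at a DC.
returned_units = {"Bag_1_Return": 1000, "Bag_2_Return": 800}

# Sellable finished goods recovered from the returns through the BOMs.
recovered = {
    return_product.replace("_Return", ""): quantity * reuse_yield[return_product]
    for return_product, quantity in returned_units.items()
}
print(recovered)  # {'Bag_1': 700.0, 'Bag_2': 400.0}
```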
With all the changes made on the input side, we can run the S1 Include Returns scenario (which was copied and renamed to “S1 Include Returns w BOM”). We will briefly look at how these changes affect the outputs.
In the Optimization Return Summary output table, users will notice that the Product Name is still either Bag_1 or Bag_2, but that the Return Product Name is either Bag_1_Return (for Bag_1) or Bag_2_Return (for Bag_2). The quantities are the same as before, since the return ratios are unchanged.

When looking at records of Flow Type = Return, we now see that the Product Name on these flows is that of the “_Return” products.

In this output table, we see that Bag_1 and Bag_2 no longer originate only from the main DC in Cincinnati, but also from the 2 bigger local DCs that accept returns (Clovis, CA, and Jersey City, NJ), where a percentage of the returns is converted back into sellable finished goods through the BOMs.

In this appendix we will cover all fields on the 2 new input tables and the 1 new output table.
User-defined variables (UDVs) are a transformative feature in Cosmic Frog’s transportation optimization algorithm (Hopper engine) that allow users to create and track custom metrics specific to their transportation needs. Once established, these variables can be seamlessly integrated into user-defined constraints and/or user-defined costs. Several example use cases are:
Before diving into Hopper’s user-defined variables, costs, and constraints, it is recommended users are familiar with the basics of building and running a Hopper model, see for example this “Getting Started with Hopper” help center article.
In this documentation, we will first describe the example model used to illustrate the UDV concepts in this help article. Next, we will cover the input and output tables available when working with user-defined variables, costs, and constraints. Finally, we will walk through the inputs and outputs of 4 UDV examples: the first two examples showcase the application of constraints to user-defined variables, while the last two examples cover how to model user-defined costs.
The characteristics of the model used to show the concepts of user-defined variables, costs, and constraints throughout this help article are as follows:
The optimized routes from the Baseline_UDV scenario are shown on this map; there are 10 routes with 10 stops each. The customers are color-coded based on the country they are in:

Filtering out the route which has stops in the most countries, we find the following route, which has stops in 4 countries: Poland (1 dark blue stop), Czech Republic (7 yellow stops), Slovakia (1 orange stop), and Germany (1 red stop):

In the Input Tables part of Cosmic Frog’s Data module, there are 3 input tables in the Constraints section that can be used to configure user-defined variables, costs, and constraints:

We will take a closer look at each of these input tables now, and will also see more screenshots of these in the later sections that walk through several examples.
On this table we specify the term(s) of each variable which we wish to track or apply user-defined costs and/or constraints to. This first screenshot shows the fields which are used to define the variable, its term(s), and what the return condition is:

The next 2 screenshots show the other fields available on the Transportation User-Defined Variables input table, which are used to set the Filter Condition for the Scope. Note that several of these fields have accompanying Group Behavior fields, which are not shown in the screenshot. If a group name is used in the Asset Name, Site Name, Shipment ID, or Product Name field, the Group Behavior field specifies how the group should be interpreted: if the Group Behavior field is set to Aggregate (the default if not specified) the activity of the variable is summed over the members of the group, i.e. the variable is applied to the members of the group together. If the Group Behavior field is set to Enumerate, then an instance of the variable will be created for each member of the group individually.


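The difference between the two Group Behavior settings can be illustrated with a small sketch (Python; the group members and activity values are hypothetical):

```python
# Hypothetical activity (e.g. delivered quantity) per member of a group referenced in a variable's scope.
group_activity = {"CZ_001": 120.0, "CZ_002": 80.0, "CZ_003": 50.0}

# Group Behavior = Aggregate (the default): one variable instance, activity summed over the group members.
aggregate = {"MyVariable": sum(group_activity.values())}
print(aggregate)  # {'MyVariable': 250.0}

# Group Behavior = Enumerate: one variable instance per group member.
enumerated = {f"MyVariable[{member}]": value for member, value in group_activity.items()}
print(enumerated)  # {'MyVariable[CZ_001]': 120.0, 'MyVariable[CZ_002]': 80.0, 'MyVariable[CZ_003]': 50.0}
```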
Consider a route which picks up 4 shipments, Shipment #1, #2, #3, and #4, and delivers them to 3 stops on a route as shown in the following diagram. In all 3 examples that follow, the filter condition is set to Shipment ID = Shipment #3 and Site Type = Delivery. This first example shows what will be returned for the variable when Scope = Shipment and Type = Quantity:

The whole route is filtered for Delivery of Shipment #3 and we see that it is delivered to the Delivery 2 stop. Since Scope = Shipment and Type = Quantity, the resulting variable value is the quantity of this shipment, which is what the yellow outline indicates.
In the next example, we look at the same route and same filtering condition (Shipment #3, Delivery), but now Scope has been changed to Stop (Type is still Quantity):

Again, we filter the route for Delivery of Shipment #3 and we see that it is delivered to the Delivery 2 stop. Since Scope = Stop, now the variable value is the total quantity delivered to the stop (outlined in yellow again): quantity Shipment #2 + quantity Shipment #3.
The final visual example is for when the Scope is now changed to Route, while keeping all the other settings the same:

The route is again filtered for Delivery of Shipment #3. Since the delivery of this shipment is on this route, the variable value is now calculated as the total quantity of the route: quantity Shipment #1 + quantity Shipment #2 + quantity Shipment #3 + quantity Shipment #4, again outlined in yellow.
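The three Scope settings from these diagrams can also be summarized in a small sketch (Python; the shipment quantities are hypothetical, the stop assignments follow the diagrams above where Shipment #3 is delivered at the Delivery 2 stop together with Shipment #2):

```python
# Hypothetical quantities for the 4 shipments picked up on the route.
quantities = {"Shipment #1": 10, "Shipment #2": 20, "Shipment #3": 30, "Shipment #4": 40}

# Stop assignments as in the diagrams: Shipment #3 is delivered at Delivery 2 together with Shipment #2.
stops = {
    "Delivery 1": ["Shipment #1"],
    "Delivery 2": ["Shipment #2", "Shipment #3"],
    "Delivery 3": ["Shipment #4"],
}

# Filter Condition: Shipment ID = Shipment #3 and Site Type = Delivery; Type = Quantity.
filtered = "Shipment #3"
stop_of_filtered = next(stop for stop, shipments in stops.items() if filtered in shipments)

scope_shipment = quantities[filtered]                             # 30: just the filtered shipment
scope_stop = sum(quantities[s] for s in stops[stop_of_filtered])  # 20 + 30 = 50: the whole stop
scope_route = sum(quantities.values())                            # 10 + 20 + 30 + 40 = 100: the whole route

print(scope_shipment, scope_stop, scope_route)  # 30 50 100
```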
Next, we will also walk through a numbers example for different combinations of Scope and Type to see how these affect the calculation of the value of a variable’s term. Consider a route with 5 stops as follows:

We will calculate the value of the following 15 variables where the Scope, Type, and Product Name to filter for are set to different values. Note that all variables have just 1 term with coefficient 1, so the variable value = scaled term value.

Constraints on user-defined variables can be set up on the User-Defined Constraints input table:

Costs can be applied to a user-defined variable by using the User-Defined Costs input table:

There are 3 output tables related to user-defined costs and constraints:

We will cover each of these now and will see more screenshots of them in the sections that follow where we will walk through several example use cases.
This table lists the values of the terms of each user-defined variable. This next screenshot shows the values of the “ProductFlag” term of the “NumberOfProductsInRoute” variable for the routes of the Baseline_UDV scenario. How this variable and its term were set up can be seen in the screenshot of the transportation user-defined variables table above (Scope = Route, Type = Product Count, Coefficient = 1).

When setting up the Number Of Products In Route variable like above and not applying costs or constraints to it, it functions as a tracker so that users can easily get at this data rather than having to manipulate the transportation optimization output tables to calculate the number of products per route.
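For comparison, computing the same metric by hand from route-level outputs would look something like the pandas sketch below (the DataFrame and its column names are placeholders, not the exact Cosmic Frog output table schema):

```python
import pandas as pd

# Placeholder route-level output data; the column names are illustrative only.
route_details = pd.DataFrame(
    {
        "routename": ["Route 1", "Route 1", "Route 1", "Route 2", "Route 2"],
        "productname": ["Ambient", "Frozen", "Ambient", "Refrigerated", "Ambient"],
    }
)

# Number of distinct products per route -- the metric the NumberOfProductsInRoute variable
# tracks automatically, without any post-processing of the output tables.
products_per_route = route_details.groupby("routename")["productname"].nunique()
print(products_per_route)
# routename
# Route 1    2
# Route 2    2
```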
If we run a scenario “MaxOneProductPerRoute” where we include the maximum 1 product per route constraint that we have seen in the screenshot in the section further above on the User-Defined Constraints input table, the outputs in this table change as follows:

This table is a roll-up to the variable level of the Optimization User-Defined Variable Term Summary output table discussed in the previous section. All the scaled terms of each variable have been added up to arrive at the variable’s value:

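In other words, the value reported for each variable in this table is (assuming a term’s scaled value is its coefficient multiplied by the value calculated for that term):

```latex
\text{variable value} \;=\; \sum_{t \,\in\, \text{terms}} \text{coefficient}_t \times \text{term value}_t
```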
If costs have been applied to a user-defined variable, the results of that can be seen in this output table:

In this first example, we will see how we can track and limit the number of countries per route. For this purpose, a variable with 5 terms is set up in the Transportation User-Defined Variables table. Each term counts if a route has any stops in 1 of the 5 countries used in the model, 1 variable term for each country. Then we will apply constraints to this variable that limit the number of countries on each route to either 1 or 2. Let’s start with looking at the variable and its 5 terms in the Transportation User-Defined Variables table:

Next, we can add constraints that apply to this variable to change the behavior in the model and limit the number of countries any given route is allowed to make stops in. We use the User-Defined Constraints table for this:

After running the Baseline_UDV scenario which does not include these constraints, we can have a look at the Optimization User-Defined Variable Summary output table:

We see that 3 routes make stops in just 1 country, 5 routes in 2 countries, and 1 route (route 9) makes stops in 4 countries when leaving the number of countries a route is allowed to make stops in unconstrained.
Now we want to see the impact of applying the Max One Country and Max Two Countries constraints through 2 scenarios and again we check the Optimization User-Defined Variable Summary output table after running these scenarios:

Maps are also helpful to visualize these outputs. As we saw in the introduction of the example model used throughout this documentation, these are the Baseline_UDV routes visualized on a map:

These routes change as follows in the MaxOneCountryPerRoute scenario:

Since some of these routes overlap on the map, let us filter a few out and color-code them based on the country to more easily see that indeed the routes each only make stops in 1 country:

In this example we will see how user-defined variables and constraints can be used to model truck compartments and their capacities. First, we set up 3 variables that track the amount of ambient, refrigerated, and frozen product on a route:

Without setting up any constraints that apply to these variables, they just track how much of each product is on a route, which can be within or over the actual compartment capacity. So, to set capacity limits, we can use the User-Defined Constraints table to set up constraints on these 3 variables that represent the capacity of the ambient, refrigerated, and frozen compartments of a truck:


After running the Baseline_UDV scenario where these constraints are not applied and another scenario, Compartment Capacity, where they are applied, we can take a look at the Optimization User-Defined Variable Summary output table to see the effect of the constraints (just showing routes 1 and 2 in the below screenshot):

Typically, when adding constraints, we expect routes to change – more routes may be needed to adhere to the constraints, and they may become less efficient. Overall, we would expect costs, distance, and time to increase. This is exactly what we see when comparing these outputs in the Transportation Summary output table for these 2 scenarios:

We have seen 2 examples of applying constraints to user-defined variables in the previous sections. Now, we will walk through 2 examples of applying costs to user-defined variables. The first example shows how to apply a variable cost based on how long a shipment sits on a route: we will specify a cost of $1 per hour the shipment spends on the route. First, we set up a variable that tracks how long a shipment spends on a route in the Transportation User-Defined Variables input table:

Next, the User-Defined Costs table is used to specify the cost of $1 per hour:

After running the CostPerShipmentTimeInTruck scenario which includes this user-defined cost, we can look at both the Transportation Shipment Summary and the Optimization User-Defined Cost Summary output tables to see this cost of $1 per hour has been applied:

Next, we open the Optimization User-Defined Cost Summary output table and filter for the same scenario and route (#4):

In our final example of this documentation, we will use the same variable ShipmentTimeInTruck from the previous example to set up a different type of cost. We will use it to find any shipments that are on a route for more than 10 hours and apply a penalty cost of $100 to each. This involves using a step cost for which we will also need to utilize the Step Costs table; we will start with looking at this table:

Next, we configure the penalty cost in the User-Defined Costs table:

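To make the difference between these two cost types concrete, the sketch below (Python; the shipment times are hypothetical) evaluates both the linear $1-per-hour cost from the previous example and the $100 step penalty for shipments on a route for more than 10 hours:

```python
# Hypothetical time each shipment spends on its route, in hours.
shipment_hours = {"Shipment A": 4.5, "Shipment B": 12.0, "Shipment C": 10.0}

def linear_cost(hours, rate=1.0):
    """Previous example: a cost of $1 per hour the shipment spends on the route."""
    return rate * hours

def step_penalty(hours, threshold=10.0, penalty=100.0):
    """This example: a $100 penalty for each shipment on a route for more than 10 hours."""
    return penalty if hours > threshold else 0.0

for shipment, hours in shipment_hours.items():
    print(shipment, linear_cost(hours), step_penalty(hours))
# Shipment A 4.5 0.0    -> no penalty
# Shipment B 12.0 100.0 -> over 10 hours, penalty applies
# Shipment C 10.0 0.0   -> exactly 10 hours, no penalty (the step is "more than 10 hours")
```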
After running a scenario in which we include the penalty cost, we can again look at the Transportation Shipment Summary and Optimization User-Defined Cost Summary output tables to see this cost in action:


Teams is an exciting new feature set on the Optilogic platform designed to enhance collaboration within Supply Chain Design, enabling companies to foster a more connected and efficient working environment. With Teams, users can join a shared workspace where all team members have seamless access to collective models and files. For a more elaborate introduction to and high-level overview of the Teams feature set, please see this “Getting Started with Optilogic Teams” help center article.
This guide will walk Administrators through the steps to set up their organization and create Teams within the Optilogic platform. For non-administrator users, there is also an “Optilogic Teams – User Guide” help center article available.
To begin, reach out to Optilogic Support at support@optilogic.com and let them know you would like to create your company’s Organization. Once they respond, they will ask you two key questions:
These questions help us determine who should have access to the Organization Dashboard, where organization administrators (“Org Admins”) can manage users, create Teams, invite Members, and more. Specifying your company’s domains also enables us to pre-populate a list of potential users—saving you time by not having to invite each colleague individually.
Once this information is confirmed, our development team will create your organization. When complete, you will be able to log in and begin using the Teams functionality.
If you have been assigned as an Organization Administrator, you can access the Organization Dashboard from the dropdown menu under your username in the top-right corner of the Optilogic platform. Click your name, then select Teams Admin from the list:

This will take you to your Organization Dashboard, where you can manage Teams and their Members.
We will first look at the Teams application within the Organization Dashboard. Here, all the organization’s teams are listed and can be managed. It will look similar to the following screenshot:


In List View format, the Teams application looks as follows and the same sections of the team edit form mentioned in the above bullets can be opened by clicking on different parts of the team’s record in the list:

In the Members application, all the organization’s members are listed, and they can be managed here:

The following diagram gives an overview of the different roles users can have when using Optilogic Teams:

From the Organization Dashboard, while in the Teams application, click the Create Team button (as seen in the screenshots in the “Teams Application for Admins” section above) to start building a new team. The Create New Team form will come up:


Once a new team is created, members will gain access to the team. If it is their first team, a new application called Team Hub will appear in their list of applications on the Optilogic platform:

Learn how to use the Team Hub application and about switching between teams and your own My Account in the “Optilogic Teams – User Guide”.
Org Admins can change existing teams by clicking on them in the Teams application while in the Organization Dashboard. Depending on where you click on the team’s card, one of 4 sections of the Edit Team form will be shown, as was also mentioned in the “Teams Application for Org Admins” section further above. When clicking on the name of the Team, the General section is shown:

The following screenshot shows the confirmation message that comes up in case an Org Admin clicks on the Delete Team button. If they want to go ahead with the removal of the team, they can click on the Delete button. Otherwise, the Cancel button can be used to not delete the Team at this time.

The second section in the Edit Team form concerns the members of the team:

In the third section of the Edit Team form the team’s appearance can be edited:

The fourth and last part of the Edit Team form is the Invites section:

Org Admins can add new users to the organization and/or to teams by clicking on the Invite Users button while in the Members application on the Organization Dashboard. The top part of the form that comes up (next screenshot) can be used to, for example, add a contractor who will help out your organization for an extended period of time – they become part of the organization and can be added to multiple teams:

In the second part of this form, people can be invited to a specific team without adding them to the overall organization; these are called Team-only users:


When someone has been emailed an invite to join a team, the email will look similar to the one in the following screenshot:

User can click on the “Click here” link to accept the invite. More on the next steps for a user to join a team can be found in the “Optilogic Teams – User Guide” help center article.
Roles of existing organization members and the teams they are part of can be updated by clicking on the team member in the list of Members:


In the Teams section of this form Org Admins can update which team(s) the member is part of and what role they have in those teams:

For Team-only members (people who are part of 1 or multiple specific teams, but who are not part of the Organization), a third section named “Invites” will be available on this form:

As a best practice, it is recommended to perform regular housekeeping (for example weekly) on your organization’s teams and their members, and your organization’s members. This will prevent situations like a former employee or temporary consultant still having access to sensitive team contents.
A user with an Org Admin role can also be part of any of the organization’s teams and work inside those or their own My Account workspace. To leave the Organization Dashboard and get back to the Optilogic platform and its applications, they can click on their name at the right top of the organization dashboard and choose “Open Optilogic Platform” from the list:

Here the Admin user can start using the Team Hub application and work collaboratively in teams, the same way as other non-Admin users do. The “Optilogic Teams – User Guide” help center article documents this in more detail.
Once you have set up your teams and added content, you are ready to start collaborating and unlocking the full potential of Teams within Optilogic!
Let us know if you need help along the way—our support team (support@optilogic.com) has your back.
We take data protection seriously. Below is an overview of how backups work within our platform, including what’s included, how often backups occur, and how long they’re kept.
Every backup—whether created automatically or manually—contains a complete snapshot of your database at the time of the backup. This includes everything needed to fully restore your data.
We support two types of backups at the database level:
Often called “snapshots,” “checkpoints,” or “versions” by users:
We use a rolling retention policy that balances data protection with storage efficiency. Here’s how it works:
Retention Tier - Time Period - What’s Retained
Short-Term - Days 1–4 - Always keep the 4 most recent backups
Weekly - Days 5–7 - Keep 1 additional backup
Bi-Weekly - Days 8–14 - Keep the newest and oldest backups
Monthly - Days 15–30 - Keep the newest and oldest backups
Long-Term - Day 31+ - Keep the newest and oldest backups
This approach ensures both recent and historical backups are available, while preventing excessive storage use.
In addition to per-database backups, we also perform server-level backups:
These backups are designed for full-server recovery in extreme scenarios, while database-level backups offer more precise restore options.
To help you get the most from your backup options, we recommend the following:
If you have additional questions about backups or retention policies, please contact our support team at support@optilogic.com.
Teams is an exciting new feature set on the Optilogic platform designed to enhance collaboration within Supply Chain Design, enabling companies to foster a more connected and efficient working environment. With Teams, users can join a shared workspace where all team members have seamless access to collective models and files. For a more elaborate introduction to and high-level overview of the Teams feature set, please see this “Getting Started with Teams” help center article.
This guide will cover how to use and take advantage of the Teams functionality on the Optilogic Platform.
For organization administrators (Org Admins), there is an “Optilogic Teams – Administrator Guide” help center article available. The Admin guide details how Org Admins can create new Teams & change existing ones, and how they can add new Members and update existing ones.
When your organization decides to start using the Teams functionality on the Optilogic platform, they will appoint one or multiple users to be the organization’s administrators (Org Admins) who will create the Teams and add Members to these teams. Once an Org Admin has added you to a team, you will see a new application called Team Hub when logged in on the Optilogic platform. You will also receive a notification on the Optilogic platform about having been added to a team:

Note that it is possible to invite people from outside an organization to join one of your organization’s teams. Think for example of granting access to a contractor who is temporarily working on a specific project that involves modelling in Cosmic Frog. An Org Admin can invite this person to a specific team, see the “Optilogic Teams – Administrator Guide” help center article on how to do this. If someone is invited to join a team, and they are not part of that organization, they will receive an email invitation to the team. The following screenshots show this from the perspective of the user who is being invited to join a team of an organization they are not part of.
The user will receive an email similar to the one shown below. In this case the user is invited to the “Onboarding” team.

Clicking on the “Click here” link will open a new browser tab where user can confirm to join the team they are invited to by clicking on the Join Team button:

After clicking on the Join Team button, user will be prompted to login to the Optilogic platform or to create an account if they do not have one already. Once logged in, they are part of the team they were invited to and they will see the Team Hub application (see next section).
They will also see a notification in their Optilogic account:

Clicking on the notifications bell icon at the top right of the Optilogic platform will open the notifications list. There will be an entry for the invite the user received to join the Onboarding team.
Should an Org Admin have deleted the invitation before the user accepts the invite, they will get the message “Failed to activate the invite” when clicking on the Join Team button:

The Team Hub is a centralized workspace where users can view and switch between the teams they belong to. At its core, Team Hub provides team members with a streamlined view of their team’s activity, resources, and members. When first opening the Team Hub application, it may look similar to the following screenshot:

Next, we will have a look at the team card of the Cosmic Frog Team:


Note that changing the appearance of a team changes it not just for you, but for all members of the team.
When clicking on a team or My Account in the Team Hub, user will be switching into that team and all the content will be that of the team. See also the next section “Content Switching with Team Hub” where this is explained in more detail. When switching between teams or My Account, first the resources of the team you are switching to will be loaded:

Once all resources are loaded, user can click on the Close button at the bottom or wait until it automatically closes after a few seconds. We will first have a look at what the Team Hub looks like for My Account, the user’s personal account, and after that also cover the Team Hub contents of a team.

The overview of a team in the Team Hub application can look similar to following screenshot:

Note that as a best practice, users can start using the team’s activity feed instead of written / verbal updates from team members to understand the details of who worked on what when.
One of the most important features of the Team Hub application is its role as a content switcher. By default, when you log into the Optilogic platform, you’ll see only your personal content (My Account)—similar to a private workspace or OneDrive.
However, once you enter Team Hub and select a specific team, the Explorer automatically updates to display all files and databases associated with that team. This team context extends across the entire Optilogic platform. For example, if you navigate to the Run Manager, you’ll only see job runs associated with the selected team.
By switching into a team, all applications and data within the platform are scoped to that team. We will illustrate this with the following screenshots where user has switched to the team named “Cosmic Frog Team”.


Besides the “Cosmic Frog Team” team, this user is also part of the Onboarding team, which they have now switched to using the Team Hub application. Next, they open the Resource Library application:

Note that it is best practice to return to your personal space in My Account when finished working in a Team, to ensure workspace content is kept separate and files are not accidentally created in/added to the wrong team.
Once an organization and its teams are set up, the next step is to start populating your teams with content. Besides adding content by copying from the Resource Library as seen in the last screenshot above, there are two primary ways to add models or files to a team.
Navigate to the Team Hub and switch into your team space. From here, you can create new files, upload existing ones, or begin building new models directly within the team. Keep in mind that any files or models created within a team are visible to all team members and can be modified by them. If you have content that you would prefer not to be accessed or edited by others, we recommend either labeling it clearly or creating it within your personal My Account workspace.

When user is in a specific team (Cosmic Frog Team here), they can add content through the Explorer (expand by clicking on the greater than icon at the top left on the Optilogic Platform): right clicking in the Explorer brings up a context menu with options to create new files, folders, and Cosmic Frog Models, and to upload files. When using these options, these are all created in / added to the active team.
You can also quickly add content to your team by using Enhanced Sharing. This feature allows you to easily select entire teams or individual team members to share content with. When you open the share modal and click into the form, you’ll see intelligent suggestions—teams you belong to and members from your organization—appear automatically. Simply click on the teams or users listed to autofill the form. To learn more about the different ways of sharing content and content ownership, please see the “Model Sharing & Backups for Multi-User Collaboration” help center article.
Please note that, regardless of how a team’s content has been created/added:
Once you have been added to any teams and have added content, you are ready to start collaborating and unlocking the full potential of Teams within Optilogic!
Let us know if you need help along the way—our support team (support@optilogic.com) has your back.
With Optilogic’s new Teams feature set (see the "Getting Started with Optilogic Teams" help center article) working collaboratively on Cosmic Frog models has never been easier: all members of a team have access to all contents added to that team’s workspace. Centralizing data using Teams ensures there is a single source of truth for files/models which prevents version conflicts. It also enables real-time collaboration where files/models are seamlessly shared across all team members, and updates to any files/models are instantaneous for all team members.
However, whether your organization uses Teams or not, there can be a need to share Cosmic Frog models, for example to:
In this documentation we will cover how to share models, and the different options for sharing. Models can be shared from an individual user or a team to an individual user or a team. Since the risk of something undesirable happening to the model increases when multiple people work on it, it is important to be able to go back to a previous version of the model. Therefore, it is best practice to make a backup of a model prior to sharing it. Continue making backups when important/major changes are going to be made or when wanting to try out something new. How to make a backup of a model will be explained in this documentation too and will be covered first.
A backup of a model is a snapshot of its exact state at a certain point in time. Once backups have been made, users can revert to them if needed. Initiating the creation of a backup of a Cosmic Frog model can be done from 3 locations within the Optilogic platform: 1) from the Models module within Cosmic Frog, 2) through the Explorer, and 3) from within the Cloud Storage application on the Optilogic platform. The option from within Cosmic Frog will be covered first:

When in the Models module of Cosmic Frog (its start page), hover over the model you want to create a backup for, and click on the 5th icon that comes up at the bottom right of the model card.
From the Cloud Storage application it works as follows:

Through the Explorer, the process is similar:

Whether from the Models module within Cosmic Frog, through the Cloud Storage application or via the Explorer, in all 3 cases the Create Backup form comes up:

After clicking on Confirm, a notification at the top of the user’s screen will pop up saying that the creation of a backup has been started:

At the same time, a locked database icon with hover over text of “Backup in progress…” appears in the Status field of the model database (this is in the Cloud Storage application’s list of databases):

This locked database icon will disappear again once the backup is complete.
Users can check the progress of the backup by going to the Account menu under their username at the right top of the screen and selecting “Account Activity” from the drop-down menu:

To access any backups, users can expand individual model databases in the Cloud Storage application:

There are 2 more columns in the list of databases that are not shown in the screenshot above:

When choosing to restore a backup, the following form comes up:

Now that we have discussed how models can be backed up, we will cover how models can be shared. Note that it is best practice to make a backup of your model before sharing it.
If your organization uses Teams, first make sure you are in the correct workspace, either a Team’s or your personal My Account area, from which you want to share a model. You can switch between workspaces using the Team Hub application, which is explained in this "Optilogic Teams - User Guide" help center article.
Like making a backup of a model database, sharing a model can also be done through the Cloud Storage application and the Explorer. Starting with the Cloud Storage option:

The Share Model options can also be accessed through the Explorer:

Now we will cover the steps of sending a copy of a model to another user or team. The original and the copy are not connected to each other after the model is shared in this way: updates to one are not reflected in the other and vice versa.


After clicking on the Send Model Copy button, a message that says “Model Copy Sent Successfully” will be displayed in the Send Model Copy form. Users can go ahead and send copies of other models to other user(s)/team(s) or close out of the form by clicking on the cross icon at the right top of the form.
In this example, a copy of the CarAssembly model was sent to the Onboarding team. In the Onboarding team’s workspace this model will then appear in the Explorer:

Next, we will step through transferring ownership of a model to another user or team. The original owner will no longer have access to the model after transferring ownership. In the example here, the Onboarding team will transfer ownership of the Tariffs model to an individual user.


After clicking on the Transfer Model Ownership button, a message that says “Transferred Ownership Successfully” will be displayed in the Transfer Model Ownership form. Users can go ahead and transfer ownership of other models to other user(s)/team(s) or close out of the form by clicking on the cross icon at the right top of the form.
There will be a notification of the model ownership transfer in the workspace of the user/team that performed the transfer:

The model now becomes visible in the My Account workspace of the individual user the ownership of the model was transferred to:

Lastly, we will show the steps of sharing access to a model with a user or team. Note that Sharing Access to a model can be done from Explorer and from the Cloud Storage application (same as for the Send Copy and Transfer Ownership options), but can also be done from the Models module in Cosmic Frog:

In Cosmic Frog's Models module, hover over the model card of the model you want to share access to and then click on the 4th icon that comes up in the bottom right of the model card.
In our walk-through example, an individual user will share access to a model called "Fleet Size Optimization - EMEA Geo" with the Onboarding team.



After the plus button was clicked to share access of the Fleet Size Optimization - EMEA Geo model with the Onboarding team, this team is now listed in the People with access list:

Now, in the Onboarding team’s workspace, we can access this model, of which the team receives a notification too:

Now that the Onboarding team has access to this model, they can share it with other users/teams too: they can either send a copy of it or share access, but they cannot transfer ownership as they are not the model’s owner.
In the Explorer of the workspace of the user/team who shared access to the model, a similar round icon with arrow inside it will be shown next to the model’s name. The icon colors are just inverted (blue arrow in white circle) and here the hover text is “You have shared this database”, see the screenshot below. There will also be a notification about having granted access to this model and to whom (not shown in the screenshot):

If the model owner decides to revoke access or change the permission level to a shared model, they need to open the Share Model Access form again by choosing Share Access from the Share Model options / clicking on the Share icon when hovering over the model's card on the Cosmic Frog start page:

If access to a model is revoked, the team/user that was previously granted access but now no longer will have access, receives a notification about this:

With Read-Only access, teammates and stakeholders can explore a shared model, view maps, dashboards, and analytics, and provide feedback — all while ensuring that the data remains unchanged and secure.
Read-Only mode is best suited for situations where protecting data integrity is a priority, for example:
See the Appendix for a complete list of actions and whether they are allowed in Read-Only Access mode or not.
Similar to revoking access to a previously shared model, in order to change the permission level of a shared model, user opens the Share Model Access form again by choosing Share Access from the Share Model options / clicking on the Share icon when hovering over the model's card on the Cosmic Frog start page:

Models with Read-Only access can be recognized on the Optilogic platform as follows:

Input tables of Read-Only Cosmic Frog models are greyed out (like output tables already are by default), and write actions (insert, delete, modify) are disabled:

Read-Only models can be recognized as follows in other Optilogic applications:
When working with models that have shared access, please keep the following in mind:
In addition to the various ways model files can be shared between users, there is a way to share a copy of all contents of a folder with another user/team too:

After clicking on the Create Share Link button, the share link is copied to the clipboard. A toast notification of this is temporarily displayed at the right top in the Optilogic platform. The user can paste the link and send it to the user(s) they want to share the contents of the folder with.
When a user who has received the share link copies it into their browser while logged into the Optilogic platform, the following form will be opened:

Folders copied using the share link option will be copied into a subfolder of the Sent To Me folder. The name of this subfolder will be the username / email of the user / team that sent the share link. The file structure of the sent folder will be maintained and appear the same as it was in the account of the sender of the share link.
See the View Share Links section in the Getting Started with the Explorer help center article on how to manage your share links.
Action - Allowed? - Notes:
Optilogic introduces the Lumina Tariff Optimizer – a powerful optimization engine that empowers companies to reoptimize supply chains in real-time to reduce the effects of tariffs. It provides instant clarity on today’s evolving tariff landscape, uncovers supply chain impacts, and recommends actions to stay ahead – now and into the future.
Manufacturers, distributors, and retailers around the world are faced with an enormous task trying to keep up with changing tariff policies and their supply chain impact. With Optilogic’s Lumina Tariff Optimizer, companies can illuminate their path forward by proactively designing tariff mitigation strategies that automatically consider the latest tariff rates.
With Lumina Tariff Optimizer, Optilogic users can stay ahead of tariff policy and answer critical questions to take swift action:
The following 7-minute video gives a great overview of the Lumina Tariff Optimizer tools:
Optilogic’s Lumina Tariff Optimization engine can be leveraged by modelers within Cosmic Frog, or within a Cosmic Frog for Excel app by other stakeholders across the business, to evaluate the tariff impact on their end-to-end supply chain. Optilogic enables users to get started quickly with Lumina through several items in the Resource Library, which include:
This documentation will cover each of these Lumina Tariff Optimizer tools, in the same order as listed above.
The first tool in the Lumina Tariff Optimizer toolset is the Tariffs example model which users can copy to their own account from the Resource Library. We will walk through this model, covering inputs and outputs, with emphasis on how to specify tariffs and their impact on the optimal solution when running network optimization (using the Neo engine) on the scenarios in the model.
Let us start by looking at the map of the Tariffs model, which is showing the model locations and flows for the Baseline scenario:

This model consists of the following sites:
Next, we will have a look at the Products table:

As mentioned above, raw materials RM1, RM2, and RM3 are supplied by Chinese suppliers and the other 6 raw materials by European suppliers, which we can confirm by looking at the Supplier Capabilities input table:

The Bills Of Materials input table shows that each finished good takes 3 of the Raw Materials to be manufactured; the Quantity field indicates how much of each is needed to create 1 unit of finished good:

Looking at the Production Policies input table, we see that both the US and Mexico factories can produce Consumables, but Rockets are only manufactured in Mexico and Space Suits only in the US:

To understand the outputs later, we also need to briefly cover the Flow Constraints input table, which shows that the El Bajio Factory in Mexico can at a maximum ship out 3.5M units of finished goods (over all products and the model horizon together):

To enter tariffs and take them into account in a network optimization (Neo) run, users need to populate the new Tariffs input table:

There are also 2 new Neo output tables that will be populated when tariffs are included in the model, the Optimization Path Flow Summary and the Optimization Tariff Summary tables:

Tariffs can be specified at multiple levels in Cosmic Frog, so users can choose the one that fits their modeling needs and available data best:
In order to model tariffs from/to a region or country, these fields need to be populated in the Customers, Facilities, and Suppliers tables:

In the Tariffs input table, all path origin location (furthest upstream) – path destination location (furthest downstream) – product combinations to which tariffs need to be applied are captured. There can be any number of echelons in between the path origin location and path destination location where the product flows through. Consider the following path that a raw material takes:

The raw material is manufactured/supplied in China (the path origin); it then flows through a location in Vietnam, then through a location in Mexico, before ending its path in the USA (the path destination, where it is consumed when manufacturing a finished good). In this case, the tariff that is set up for this raw material with path origin = China and path destination = USA will be applied. The tariff is applied to the segment of the path where the product arrives in the region / country of its final destination. In the example here, that is the last leg (lane / segment) of the path, i.e. the Mexico to USA lane.
If we have a raw material that takes the same initial path, except it ends in Mexico to be consumed in a finished good, then the tariff that is set up for this raw material with path origin = China and path destination = Mexico will be applied. Continuing this example: if the finished good manufactured in Mexico is then shipped to the US and sold there, and if there is a path with a tariff set up from Mexico to USA for the finished good, then that tariff will be applied (path origin = Mexico, path destination = USA). In this last example, the entire path is just the 1 segment between Mexico and USA.
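A rough sketch of this matching logic (Python; illustrative only, with hypothetical tariff rates, and of course much simpler than the actual Neo engine implementation):

```python
# Illustrative sketch of how a tariff is matched to a product path, as described above.
# A path is the ordered list of (location, country) stops the product flows through;
# tariffs are keyed by (path origin country, path destination country, product). Rates are hypothetical.
tariffs = {
    ("CN", "US", "RM1"): 0.25,
    ("CN", "MX", "RM1"): 0.10,
    ("MX", "US", "FG1"): 0.15,
}

def applicable_tariff(path, product):
    """Look up the tariff defined for the path's origin country -> destination country, if any."""
    origin_country, destination_country = path[0][1], path[-1][1]
    return tariffs.get((origin_country, destination_country, product))

def tariff_segment(path):
    """The segment the tariff is applied on: where the product first arrives in its destination country."""
    destination_country = path[-1][1]
    for (from_loc, _), (to_loc, to_country) in zip(path, path[1:]):
        if to_country == destination_country:
            return (from_loc, to_loc)

# The raw material path from the example above: China -> Vietnam -> Mexico -> USA.
path = [("Supplier", "CN"), ("Hub", "VN"), ("DC", "MX"), ("Plant", "US")]
print(applicable_tariff(path, "RM1"))  # 0.25 -> the China-to-USA tariff applies
print(tariff_segment(path))            # ('DC', 'Plant') -> applied on the Mexico-to-USA leg
```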
Now we will look at how this can be set up in the Tariffs input table:

Please note:
Three scenarios were run in the Tariffs example model:

Now, we will look at the outputs for these 3 scenarios, first at a higher level and later on, we will dig into some details of how the tariff costs are calculated as well.
The Financials stacked bar chart in the standard Optimization Scenario Comparison dashboard in the Analytics module of Cosmic Frog can be used to compare all costs for all 3 scenarios in 1 graph:

To compare the Tariffs by path origin – path destination and product, a new “Optimization Tariffs Summary” dashboard was created. We will look at the Baseline New Tariffs scenario first, and the Optimized New Tariffs scenario next:


Note that in the Appendix it is explained how this chart can be created.
Next, we will take a closer look at some more detailed outputs, starting with how much demand there is in the model for Rockets and Consumables, the 2 finished goods the Mexican factory in El Bajio can manufacture. The next screenshot shows the Optimization Demand Summary network optimization output table, filtered for Rockets and with a summation aggregation applied to it to show the total demand for Rockets at the bottom of the grid:

Next, we change the filter to look at the Consumables product:

In conclusion: the demand for Rockets is nearly 3.5M units and for Consumables nearly 10.5M. Rockets can only be produced in Mexico whereas Consumables can be produced by both factories. From the charts above we suspected a shift in production from US to Mexico for the Consumables finished good in the Optimized New Tariffs scenario, which we can confirm by looking at the Optimization Production Summary output table:

Since the production of Consumables requires raw materials RM1, RM2, and RM3, we expect the above production quantities for Consumables to be reflected in the amount of these raw materials that was moved from the suppliers in China to the US vs to Mexico. We can see this in the Optimization Flow Summary network optimization output table, which is filtered for the 2 scenarios with new tariffs, Port to Port lanes, and these 3 raw materials:

The custom Optimization Tariff Summary and Optimization Path Flow Summary output tables are automatically generated after running a network optimization on a model with a populated Tariffs table. The first of these 2 is shown in the next screenshot, where we have filtered out the raw materials RM1, RM2, and RM3 again, as well as the Consumables finished good, for the 2 scenarios that use the new tariffs:

Where the Optimization Tariff Summary output table summarizes the tariffs at the scenario - path origin – path destination – product level, the Optimization Path Flow Summary output table gives some more detail around the whole path, and on which segments the tariffs are applied. The next 2 screenshots show 6 records of this output table for the Tariffs example model:

For the 2 scenarios that use the new tariffs, records are filtered out for raw material RM1 where the Path Start Location represents the CN region and the Path End Location represents the MX region. These Path Start and End Locations are automatically generated based on the Path Origin Property and Value and the Path Destination Property and Value set in the Tariffs input table. Scrolling right for these 6 records:

We see that the path for RM1 is the same in both scenarios: it originates at location Guangzhou in China, is moved to Shanghai Port (CN), from Shanghai Port to Altamira Port (MX), and from Altamira Port to the El Bajio Factory (MX). The Tariff Cost is calculated from the Flow Quantity in the same way as explained above, and we see that the tariffs are applied on the second segment, where the product arrives in the region/country of its final destination.
Wondering where to go from here? If you want to start using tariffs in your own models but are not sure where to start, please see the “Cosmic Frog Utilities to Create the Tariffs Table” section further below, which also includes step-by-step instructions based on what data you have available.
In the next section, we will first discuss how quick sensitivity analyses around tariffs can be run using a Cosmic Frog for Excel App.
To enable Cosmic Frog users, and also managers and executives with no or limited knowledge of Cosmic Frog, to run quick sensitivity scenarios around changing tariffs, Optilogic has developed an Excel Application for this specific purpose. Users can connect to their Cosmic Frog model that contains a populated Tariffs input table and indicate which tariffs to increase/decrease by how much, run network optimization with these changed tariffs, and review the optimization tariff summary output table, all in 1 Excel workbook. Users can download this application and related files from the Resource Library.
The following represents a typical workflow when using the Tariffs Rapid Optimizer application:


For users to take advantage of the power of the Lumina Tariff Optimizer, they will want to create their own network optimization model which includes a populated Tariffs input table (see also the “Tariffs Model – Tariffs Table” section earlier in this documentation). Depending on the data available to the user, populating the Tariffs input table can be a straightforward task or a difficult one if little or no tariff data is known/available within the organization. Optilogic has developed 3 utilities to help users with this task. The utilities are available from within Cosmic Frog, which will be covered in this section of the documentation, and they are also available through the Cosmic Frog for Excel Tariffs Builder App, which will be covered in the next section. Here follows a short description of each utility; each will be covered in more detail later in this section:
In Cosmic Frog, they are accessible from the Utilities module (click on the 3 horizontal bars icon at the top left in Cosmic Frog to open the Module menu drop-down and select Utilities):

The utilities are listed under System Utilities > Tariff.
The latter 2 utilities hook into Avalara APIs, and users need to obtain and use their own Avalara API key for each to be able to use these utilities from within Cosmic Frog or the Tariffs Builder Excel App.
The following list shows the recommended steps for users with varying levels of Tariffs data available to them from least to most data available (assuming an otherwise complete Cosmic Frog model has been built):
To populate the Tariffs table with all possible path origin – path destination – product combinations, based on the contents of the Transportation Policies input table, use this first utility:

Consider a small model with 1 customer in the US, 2 facilities (1 DC and 1 factory) both in the US, 1 supplier in China, and 2 products (1 finished good and 1 component):




After running the first utility, Generate Tariff Paths (using Region as the data to use for the path origin and path destination), the Tariffs table is generated and populated as shown in the next 2 screenshots:

All combinations for path origin region, path destination region, and product have been added to the Tariffs table. Scrolling further right, we see the remaining fields of this table:

To update the HS Code field in the Tariffs table, we can use the second utility:

Users can find the full path of a file uploaded to their Optilogic account as follows:

The file containing the product master data needs to have the same columns as shown in the next screenshot:

Note that columns B-F contain information for products that does not match the product names in Cosmic Frog, as this is just an example to show how the utility works.
After running the second utility, HS Code Classification, we see that the HS Code field in the Tariffs table is now populated:

Next, to look up duty rates based on the HS Code field, we can use the third utility:

After running the third utility, Lookup Duty Rates, we see that the Duty Rate field in the Tariffs table is now populated:

The raw output from the API is placed in the Duty Rate field and user needs to update this so that the field contains just a number representing the total duty rate. For the second record (US region to China region for product RM), the total duty rate is 35% (25% + 10%), and user needs to enter 35 in this field. For the third record (China region to US region for product Rockets), the duty rate is 27.5% (7.5% + 20%), and user needs to enter 27.5 in this field. For the fourth record (China region to US region for product RM), the total duty rate is 25%, and user needs to enter 25 in this field.
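Since this clean-up can be tedious for many records, a small helper script could do the conversion. Here is a hypothetical sketch; the exact text format of the raw API output may differ, so the pattern below is an assumption.

```python
# Hypothetical helper to turn raw duty-rate text (e.g. "25% + 10%") into the
# single number Cosmic Frog expects in the Duty Rate field.
import re

def total_duty_rate(raw_text):
    """Sum all percentage components found in the raw duty-rate text."""
    components = [float(x) for x in re.findall(r"(\d+(?:\.\d+)?)\s*%", raw_text)]
    return sum(components)

print(total_duty_rate("25% + 10%"))   # 35.0
print(total_duty_rate("7.5% + 20%"))  # 27.5
print(total_duty_rate("25%"))         # 25.0
```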
When running a utility in Cosmic Frog, user can track the progress in the Model Activity window:

The 3 utilities covered in the previous section to generate and populate the Tariffs input table are also made available in the Cosmic Frog for Excel Tariffs Builder App, which we will cover in this section. Users can download this application and related files from the Resource Library.
The following represents a typical workflow when using this Tariffs Builder application:

The next screenshot shows the Tariffs table after just running the Build Tariff workflow (bullet 4 in the list above):

The next screenshot shows the Product Master worksheet, which contains the product information to be used by the HS Code Classification workflow; it needs to be in this format and users should enter as much product information here as possible:

After also running the HS Code Classification and the Duty Rate Lookup workflows (bullets 6 and 7 in the list further above), we see that these fields are now also populated on the Tariffs worksheet:

We hope users feel empowered to take on the challenging task of incorporating tariffs into their optimization workflows. For any questions, please do not hesitate to contact Optilogic support at support@optilogic.com.
In this appendix we will show users how to create a stacked bar chart for each path origin – path destination pair, showing the tariff costs by product.
In the Analytics drop-down menu in the toolbar while in the Analytics module of Cosmic Frog, select New Dashboard, give it a name (e.g. Optimization Tariff Summary), then click on the blue Visualization button on the top right to create a new chart for the dashboard. In the New Visualization configuration form that comes up, type “tariff” in the Tables Search box, then check the box for the Optimization Tariff Summary table in the list, and click on Select Data.

To create the OD Path calculated field, click on the plus icon at the top right of the Fields list and select Calculated Field which brings up the Edit Calculated Field configuration window:

Cosmic Frog supports importing and exporting both CSV and Excel files directly through the application. This enables users to, for example:
In this documentation we will cover how users can import and export data into and out of Cosmic Frog, and illustrate this with multiple examples.
There are 2 methods of importing Excel/CSV data into Cosmic Frog’s input tables available to users:
Pointers on how data to be imported needs to be formatted will be covered first, including some tips and callouts of specifics to keep in mind when using the upsert import method. Next, the steps to import a CSV/Excel file will be walked through.
Data is mapped from CSV/Excel files based on column names matching the Cosmic Frog column names, and on table names matching the file name (CSV) or worksheet name (Excel):
Data preparation tips:

CSV vs Excel: a CSV file only has 1 “worksheet”, so it can only contain data to be imported into 1 table, whereas Excel files can have multiple worksheets with data to be imported into different tables in Cosmic Frog.
Please take note of how existing records are treated when using the upsert import method to import to a table which already has some data in it:
We will illustrate these behaviors through several examples too.
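To make the upsert behavior concrete before those examples, below is an illustrative-only pandas sketch of the matching rule: rows are matched on the table's primary key columns, matched rows get their non-key fields updated, and rows with new key values are inserted. Cosmic Frog performs the real import; this is not its implementation.

```python
# Illustrative-only sketch of upsert semantics using pandas.
import pandas as pd

def upsert(existing, incoming, key_cols):
    merged = existing.set_index(key_cols)
    incoming = incoming.set_index(key_cols)
    merged.update(incoming)  # update non-key fields of rows whose keys match
    new_rows = incoming.loc[~incoming.index.isin(merged.index)]  # keys not yet in the table
    return pd.concat([merged, new_rows]).reset_index()

existing = pd.DataFrame({"Product Name": ["FG_A", "FG_B"], "Unit Price": [10, 20]})
incoming = pd.DataFrame({"Product Name": ["FG_B", "FG_C"], "Unit Price": [25, 30]})
print(upsert(existing, incoming, ["Product Name"]))
# FG_B's price is updated to 25, FG_C is inserted, FG_A is left untouched.
```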
Users can import 1 or multiple CSV or Excel files simultaneously; please take note of how the import will work in the following situations:
Once ready to import the prepared CSV/Excel file(s), user has 2 ways of accessing the import and export methods: from the File menu in the toolbar and from the right-click context menu of an input table. It looks like this from the File menu to import a file:

And when using the right-click context menu the steps to import a file are as follows:

When using the replace import method, a confirmation message will now be shown on which user can click Import to continue the import or Cancel to abort.
Next, a file explorer window opens in which user can browse to and select the CSV/Excel file(s) to import:

Once the import starts, a status message shows at the top of the active table:

The Model Activity log will also have an entry for each import action:

User can see the results of the import by opening and inspecting the affected input table(s), and by looking at the row counts for the tables in the input tables list, outlined in green in this screenshot:

A common way to start building a new model in Cosmic Frog is to make use of the replace import method to populate multiple tables simultaneously with data from Excel or CSV files. These files have typically been prepared from ERP extracts which have been manipulated to match the Cosmic Frog table and column names. This way, users do not need to enter data manually into the Cosmic Frog input tables, which would be very laborious. Note that it can be helpful to first export empty tables from a new, empty Cosmic Frog model to have a template to start filling out (see the “Exporting to CSV/Excel Files” section further below on how to do this).
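As a minimal sketch of such file preparation (using pandas rather than any Cosmic Frog tooling): worksheet names match the Cosmic Frog table names, and column headers must match the table's column names. The headers and records below are placeholders; check the exact names against an exported empty template, as suggested above.

```python
# Prepare an Excel file whose worksheet names match Cosmic Frog table names.
import pandas as pd

customers = pd.DataFrame({"Customer Name": ["CZ_Chicago", "CZ_Dallas"],
                          "Country": ["US", "US"]})
facilities = pd.DataFrame({"Facility Name": ["DC_Memphis"], "Country": ["US"]})
products = pd.DataFrame({"Product Name": ["FG_Racket_Tennis"], "Unit Price": [120]})

with pd.ExcelWriter("ModelData.xlsx") as writer:
    customers.to_excel(writer, sheet_name="Customers", index=False)
    facilities.to_excel(writer, sheet_name="Facilities", index=False)
    products.to_excel(writer, sheet_name="Products", index=False)
```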
Starting with an empty new model in Cosmic Frog:

User has prepared the following Excel .xlsx file:

After importing this file into Cosmic Frog, we notice that the Customers, Facilities and Products tables now have row counts that match the number of records we had in the Excel file that was used for the import, and we can open the individual tables to see the imported records:

Consider a user who is modelling a sports equipment company and has populated the Products table of a Cosmic Frog model with 8 products as follows:

After working with the model for a while, the user realizes a few things:
As item number 1 will change the product names, a column that is part of the primary key of the Products table, user will need to use the replace import method to make these changes as the upsert method does not change the values of columns that are part of the primary key. Following is the .xlsx file user prepares to replace the data in the Products table with:

After importing the file using the replace method, the Products table looks like this:

We see the records are exactly the same as what was contained in the Products.xlsx file that was imported, and the row count for the Products table has correctly gone up to 10 with the 2 new products added.
Continuing from the Products table in the last screenshot above, user now wants to make a few additional changes as follows:
To make these changes to the Products table, the user prepares the following Products file to be upserted to the Products table, where the green numbers in the screenshot below match the items described in the bullet point list directly above:

After importing this file into the Products table using the upsert method, the table contains the following records; the ones changed / added are listed at the bottom:

In the boxes outlined in green we see that all the expected changes and the insertion of the 1 new record have been made.
Let us also illustrate what will happen when files with invalid/missing data are imported. We will use the replace import method for the example here, but similar results will be seen when using the upsert method. Following screenshot shows a Products table that has been prepared in Excel, where we can see several issues already: a blank Product Name, a negative value for Unit Price, etc.

After this file is imported to the Products table using the replace method, the Products table will look as follows:

The cells that are outlined in red contain invalid values. Hovering over each cell will show a tooltip message describing the problem.
For tables with many records, it may be hard to find the fields in red outline manually. To help with this, there is a standard filter user can apply that will show all records that have 1 or multiple input data errors:

In conclusion, Cosmic Frog will let a user import invalid data, and then helps user identify the data issues with the red outlines, hover over tooltips, and the Show Input Data Errors filter.
Consider following Transportation Policies table:

There is now a change where from MFG_1 all racket products need to be shipped by Parcel for a fixed cost of $50. User creates 2 Named Filters (see the Named Filters in Cosmic Frog help center article) in the Products table: 1 named Rackets that filters out all racket products (those with a product name that starts with FG_Racket), and 1 named AllExceptRackets that filters out all non-racket products (those that do not contain racket in the product name). Next, user prepares following TransportationPolicies.csv file to upsert into the Transportation Policies table, with the intention to update the first 2 records in the existing table to be specific to the AllExceptRackets products and to add 2 new ones for the Rackets products:

The result of using this file to upsert to the Transportation Policies table is as follows:

This example shows that users need to be mindful of which fields are part of the table’s primary key and remember that values of primary key fields cannot be changed by the upsert import method. An example workflow that will achieve the desired changes to the Transportation Policies table is as follows:
It is possible to export a single table or multiple tables (input and output tables) to CSV or Excel from Cosmic Frog. Similar to importing data from CSV/Excel, user can access the export options in 2 ways: from the File menu in the toolbar and from the context menus that come up when right-clicking on tables in the input/output/custom tables lists.
Please note:
The steps to export multiple tables to an Excel file are as follows:

Once the export starts, following message appears at the top of the active table:

Once the export is complete, the exported file can be found in the folder where user’s downloaded files are saved:

When exporting multiple tables to Excel or CSV, the downloaded file will be a .zip file with an automatically generated name based on the model’s Cosmic Frog ID. Extracting the zip-file will show an .xlsx file of the same name, which can be opened in Excel:

These are the steps to export multiple tables to CSV:

When the export starts, the same “File is exporting…” message as shown in the previous section will be showing at the top of the active table. Once the export process is finished, the exported file can again be found in the folder where user’s downloaded files are saved:

The file is again a zip-file, and it has the same name based on the model’s Cosmic Frog ID, just appended with (1), as there is already a zip-file of the same name in the Downloads folder from the previous export to Excel. Unzipping the file creates a new sub-folder of the same name in the Downloads folder:

Exporting a single table to Excel can also be done from the File menu, in the same way as multiple tables are exported to Excel, which was shown above in the “Export Multiple Tables to Excel” section. Now, we will show the second way of doing this by using the context menu that comes up when right-clicking on a table:

When the export starts, the same “File is exporting…” message as shown above will be showing at the top of the active table. Once the export process is finished, the exported file can again be found in the folder where user’s downloaded files are saved:

The name of the exported Excel file matches that of the table that was exported.
Exporting a single table to CSV can also be done from the File menu, in the same way as multiple tables are exported to CSV, which was shown above in the “Export Multiple Tables to CSV” section. Now, we will show the second way of doing this by using the context menu that comes up when right-clicking on a table:

When the export starts, the same “File is exporting…” message as shown above will be showing at the top of the active table. Once the export process is finished, the exported file can again be found in the folder where user’s downloaded files are saved:

For single tables exported to CSV, the name of the file is the same as the name of the exported table. If the Cosmic Frog table was filtered, the file name is appended with “_filtered” like it is here to remind user that only the filtered rows are contained in this exported file.
Tax systems can be complex; those in Greece, Colombia, Italy, Turkey, and Brazil, for example, are considered to be among the most complex. It can however be important to include taxes, whether as a cost or benefit or both, in supply chain modeling as they can have a big impact on sourcing decisions and therefore overall costs. Here we will showcase an example of how Cosmic Frog’s User Defined Variables and User Defined Costs can be used to model Brazilian ICMS tax benefits and take these into account when optimizing a supply chain.
The model that is covered in this documentation is the “Brazil Tax Model Example”, which was put together by Optilogic’s partner 7D Analytics. It can be downloaded from the Resource Library. Besides the Cosmic Frog model, the Resource Library content also links to the “Cosmic Frog – BR Tax Model Video”, likewise put together by 7D Analytics.
A helpful additional resource for those unfamiliar with Cosmic Frog’s user defined variables, costs, and constraints is this “How to use user defined variables” help article.
In this documentation the setup of the example model will first be briefly explained. Next, the ICMS tax in Brazil will be discussed at a high level, including a simplified example calculation. In the third section, we will cover how ICMS tax benefits can be modelled in Cosmic Frog. And finally, we will look at the impact of including these ICMS tax benefits on the flows and overall network costs.
One quick note upfront is that the screenshots of Cosmic Frog tables used throughout this help article may look different when comparing to the same model in user’s account after taking it from the Resource Library. This is due to columns having been moved or hidden and grids being filtered/sorted in specific ways to show only the most relevant information in these screenshots.
In this example model, 2 products are included: Prod_National to represent products that are made within Brazil at the MK_PousoAlegre_MG factory and Prod_Imported to represent products that are imported, which is supplied from SUP_Itajai_SC within the model, representing the seaport where imported products would arrive. There are 6 customer locations which are in the biggest cities in Brazil; their names start with CLI_. There are also 3 distribution centers (DCs): DC_Barueri_SP, DC_Contagem_MG, and DC_FeiraDeSantana_BA. Note that the 2 letter postfixes in the location names are the abbreviations of the states these locations are in. Please see the next screenshot where all model locations are shown on a map of Brazil:

The model’s horizon is all of 2024 and the 6 customers each have demand for both products, ranging from 100 to 600 units. The SUP_ location (for Prod_Imported) and MK_ location (for Prod_National) replenish the DCs with the products. Between the DCs, some transfers are allowed too. The demand at the customer locations can be fulfilled by 1, 2 or all 3 DCs, depending on the customer. The next screenshot of the Transportation Policies table (filtered for Prod_National) shows which procurement, replenishment, and customer fulfillment flows are allowed:


For the other product modelled, Prod_Imported, the same customer fulfillment, DC-DC transfer, and supply options are available, except:
In Brazil, the ICMS tax (Imposto sobre Circulação de Mercadorias e Serviços, or Tax on the Circulation of Goods and Services) is levied by the states. It applies to the movement of goods, transportation services between states or municipalities, and telecommunication services. The rate varies and depends on the state and product.
When a company sells a product, the sales price includes ICMS, which results in an ICMS debit for the company (the company owes this to the state). Likewise, when purchasing or transferring product, the ICMS is included in what the company pays the supplier. This creates ICMS credit for the company. The difference between the ICMS debits and credits is what the company will pay as ICMS tax.
The next diagram shows an ICMS tax calculation example, where the company also has a 55% tax benefit, which is a discount on the ICMS it needs to pay.

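To make the mechanics concrete, here is a small sketch with purely hypothetical amounts (the actual figures are in the diagram above): the company pays its ICMS debits minus its ICMS credits, and the 55% benefit is a discount on that balance.

```python
# Hypothetical ICMS calculation; all amounts are made up for illustration.
icms_debit = 1_000_000      # ICMS included in sales (owed to the state)
icms_credit = 400_000       # ICMS included in purchases/transfers (credit)
icms_balance = icms_debit - icms_credit   # 600,000 owed before the benefit
benefit = icms_balance * 55 / 100         # 330,000 discount (55% benefit)
icms_to_pay = icms_balance - benefit      # 270,000 actually paid
print(icms_balance, benefit, icms_to_pay)
```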
In order to include ICMS tax benefits in a model, we need to be able to calculate ICMS debits and credits based on the amount of flow between locations in different states for both national and imported products. As different states and different products can have different ICMS rates, we need to define these individual flow lanes as variables and apply the appropriate rate to each. This can be done by utilizing the User Defined Variables and User Defined Costs input tables, which can be found in the “Constraints” section of the Cosmic Frog input tables, shown in the below screenshot (here user entered a search term of “userdef” to filter out these 2 tables):

In the User Defined Variables table, we will define 3 variables related to DC_Contagem_MG: one that represents the ICMS Debits, one that represents the ICMS Credits, and one that represents the ICMS Balance (= ICMS Debits – ICMS Credits) for this DC. The ICMS Debits and ICMS Credits variables have multiple terms that each represents a flow out of or a flow into the Contagem DC, respectively. Let us first look at the ICMS Debits variable:

Still looking at the same top records that define the DC_Contagem_MG|ICMS_Debit variable, but freezing the Variable Name and Term Name columns and scrolling right, we can see more of the columns in the User Defined Variables table:

Note that there are quite a few custom columns in this table (not shown in the screenshots; can be added through Grid > Table > Create Custom Column), which were used to calculate the ICMS rates outside of the model. These are helpful to keep in the model, should changes need to be made to the calculations.
Next, we will have a look at the ICMS Credit variable, which is made up of 3 terms, where each term represents a possible supply/replenishment flow into the Contagem DC:

The last step on the User Defined Variables table is to combine the ICMS Credit and ICMS Debit variables to calculate the ICMS balance:

To finalize the setup, we need to add 1 record to the User Defined Costs table, where we will specify that the company has a 55% discount (tax incentive) for the ICMS it pays relating to the Contagem DC:

As mentioned in the previous section, all records in the User Defined Variables and User Defined Costs tables have their Status set to Exclude. This way, when the Baseline scenario is run, the ICMS tax incentive is not included, and the network will be optimized just based on the costs included in the model (in this case only transportation costs). We want to include the ICMS tax incentive in a scenario and then compare the outputs with the Baseline scenario. This “IncludeDCMGTaxBenefit” scenario is set up as follows:

Next, we have a look at the second scenario item that is part of this scenario:

With the scenario set up, we run a network optimization (using the Neo engine) on both scenarios and then first look in the Optimization Network Summary output table:

Notice that the Baseline scenario, as expected, only contains transportation costs, while the IncludeDCMGTaxBenefit scenario also contains user defined costs, which represent the calculated ICMS tax benefit and have a negative value. So, overall, the IncludeDCMGTaxBenefit scenario has about R$ 331k lower total cost as compared to the Baseline scenario, even though the transportation costs are close to R$ 47k higher. Since the transportation costs are different between the 2 scenarios, we expect some of the flows to have changed.
There are 3 network optimization output tables that contain the outputs related to User Defined Variables and Costs:

We will first discuss the Optimization User Defined Variable Term Summary output table:

The Optimization User Defined Variable Summary output table contains the outputs at the variable level (i.e. the individual terms of the variables have been aggregated):

Finally, the Optimization User Defined Cost Summary output table shows the cost based on the 55% benefit that was set:

The DC_Contagem_MG_TaxIncentive benefit is calculated from the DC_Contagem_MG|ICMS_Balance variable, where the Variable Value of R$ 686,980 is multiplied by -0.55 to arrive at the Cost value of R$ -377,839.
Now that we understand at a high level the cost impact of the ICMS tax incentive and the details of how this was calculated, let us look at more granular outputs, starting with looking at the flows between locations. Navigate to the Maps module within Cosmic Frog and open the maps named Baseline and Include DC MG Tax Benefit, which show outputs from the Baseline and IncludeDCMGTaxBenefit scenarios, respectively. The next 2 screenshots show the flows from DCs to customer locations: Baseline flows in the top screenshot and scenario “Include DC MG Tax Benefit” flows in the bottom screenshot:


We see that in the Baseline the customer in Rio de Janeiro is served by the DC in Sao Paulo. This changes in the scenario where the tax benefit is included: now the Rio de Janeiro customer is served by the Contagem DC (located close to Belo Horizonte). The other customer fulfillment flows are the same between the 2 scenarios.
This model also has 2 custom dashboards set up in the Analytics module; the 1. Scenarios Overview dashboard contains 2 graphs:

This Summary graph shows the cost buckets for each scenario as a bar chart. As discussed when looking at the Optimization Network Summary output table, the IncludeDCMGTaxBenefit scenario has an overall lower cost due to the tax benefit, which offsets the increased transportation costs as compared to the Baseline scenario.

This Site Summary bar chart shows the total outbound quantity for each DC / Factory / Supplier by scenario. We see that the outbound flow for the DC in Barueri is reduced by 500 units in the IncludeDCMGTaxBenefit scenario as compared to the Baseline scenario, whereas the Contagem DC has an increased outbound flow, from 1,000 to 2,500 units. We can examine these shifts in further detail in the second custom dashboard named 2. Outbound Flows by Site, as shown in the next 2 screenshots:

This first screenshot of the dashboard shows the amount of flow from the 3 DCs and the factory to the 6 customer locations. As we already noticed on the map, the only shift here is that the Rio De Janeiro customer is served by the Barueri DC in the Baseline scenario and this changes to it being served by the Contagem DC in the IncludeDCMGTaxBenefit scenario.

Scrolling further right in this table, we see the replenishment flows from the 3 DCs and the Factory to the 3 DCs. There are some more changes here where we see that the flow from the factory to the Barueri DC is reduced by 500 units in the scenario, whereas the flow from the factory to the Contagem DC is increased by 500 units. In the Baseline, the Barueri DC transferred a total of 1,000 units to the other 2 DCs (500 each to the Contagem and Feira de Santana DCs), and the other 2 DCs did not make DC transfers. In the Tax Benefit scenario, the Barueri DC only transfers to the Contagem DC, but now for 1,500 units. We also see that the Contagem DC now transfers 500 units to the Feira de Santana DC, whereas it did not make any transfers in the Baseline scenario.
We hope this gives you a good idea of how taxes and tax incentives can be considered in Cosmic Frog models. Give it a go and let us know of any feedback and/or questions!
Utilities enable powerful modelling capabilities for use cases like integration with other services or data sources, repeatable data transformations, or anything that can be supported by Python! System Utilities are available as a core capability in Cosmic Frog for use cases like LTL rate lookups, TransitMatrix time & distance generation, and copying items like Maps and Dashboards from one model to another. More useful System Utilities will become available in Cosmic Frog over time. Some of these System Utilities are also available in the Resource Library, from which they can be downloaded and then customized and made available to modelers for specific projects or models. In this Help Article we will cover both how to use System Utilities as well as how to customize and deploy Custom Utilities.
The “Using and Customizing Utilities” resource in the Resource Library includes a helpful 15-minute video on Cosmic Frog Model Utilities and users are encouraged to watch this.
In this Help Article, System Utilities will be covered first, before discussing the specifics of creating one’s own Utilities. Finally, how to use and share Custom Utilities will be explained as well.
Users can access utilities within Cosmic Frog by going to the Utilities section via the Module Menu drop-down:

Once in the Utilities section, user will see the list of available utilities:

The appendix of this Help Article contains a table of all System Utilities and their descriptions.
Utilities vary in complexity by how many input parameters a user can configure and range from those where no parameters need to be set at all to those where many can be set. Following screenshot shows the Orders to Demand utility which does not require any input parameters to be set by the user:

The Copy map to a model utility shown in the next screenshot does require several parameters to be set by the user:

When the Run Utility button has been clicked, a message appears beneath it briefly:

Clicking on this message will open the Model Activity pane to the right of the tab(s) with open utilities:


Users will not only see activities related to running utilities in the Model Activity list. Other actions that are executed within Cosmic Frog will be listed here too, like for example when user has geocoded locations by using the Geocode tool on the Customers / Facilities / Suppliers tables or when user makes a change in a master table and chooses to cascade these changes to other tables.
Please note that the following System Utilities have separate Help Articles where they are explained in more detail:
The utilities that are available in the Resource Library can be downloaded by users and then customized to fit the user’s specific needs. Examples are to change the logic of a data transformation, apply similar logic but to a different table, etc. Or users may even build their own utilities entirely. If a user updates a utility or creates a new one, they can share these back with other users so they can benefit from them as well.
Utilities are Python scripts that follow a specific structure, which will be explained in this section. They can be edited directly in the Atlas application on the Optilogic platform, or users can download the Python file that is being used as a starting point and edit it using an IDE (Integrated Development Environment) installed on their computer. A text editor geared towards coding, like for example Visual Studio Code, will work fine too for most. An advantage of working locally is that users can take advantage of code completion features (auto-completion while typing, showing what arguments functions need, catching incorrect syntax/names, etc.) by installing an extension that provides IntelliSense, such as the Python extension for Visual Studio Code. The screenshots of the Python files underlying the utilities that follow in this documentation are taken while working with them in Visual Studio Code locally, on a machine that has such an extension installed.
A great resource on how to write Python scripts for Cosmic Frog models is this “Scripting with Cosmic Frog” video. In this video, the cosmicfrog Python library, which adds specific functionality to the existing Python features to work with Cosmic Frog models, is covered in some detail.
We will start by looking at the Python file of the very simple Hello World utility. In this first screenshot, the parts that can stay the same for all utilities are outlined in green:

Next, onto the parts of the utility’s Python script that users will want to update when customizing / creating their own scripts:

Now, we will discuss how input parameters, which users can then set in Cosmic Frog, can be added to the details function. After that we will cover different actions that can be added to the run function.
If a utility needs to be able to take any inputs from a user before running it, these are created by adding parameters in the details function of the utility’s Python script:

We will take a closer look at a utility that uses parameters and map the arguments of the parameters back to what the user sees when the utility is open in Cosmic Frog; see the next 2 screenshots, where the numbers in the script screenshot are matched to those in the Cosmic Frog screenshot to indicate which code leads to which part of the utility in Cosmic Frog. These screenshots use the Copy dashboard to a model utility, for which the Python script (Copy dashboard to a model.py) was downloaded from the Resource Library.

Note that Python lists are 0-indexed, meaning that the first parameter (Destination Model in this example) is referenced by typing params[0], the second parameter (Replace or Append dashboards) by typing params[1], etc. We will see this in the code when adding actions to the run function below too.
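As a standalone illustration of this 0-indexed access: in a real utility the params list is supplied by Cosmic Frog when the user clicks Run Utility, and the values below are made up.

```python
# Illustration only; Cosmic Frog passes the real params list to the utility.
params = ["My Destination Model", "Append"]

destination_model = params[0]   # 1st parameter defined in the details function: Destination Model
replace_or_append = params[1]   # 2nd parameter: Replace or Append dashboards
print(destination_model, replace_or_append)
```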
Now let’s have a look at how the above code translates to what a user sees in the Cosmic Frog user interface for the Copy dashboard to a model System Utility (note that the numbers in this screenshot match with those in the above screenshot):

The actions a utility needs to perform are added to the run function of the Python script. These will be different for different types of utilities. We will cover the actions the Copy dashboard to a model utility uses at a high level and refer to Python documentation if user is interested in understanding all the details. There are a lot of helpful resources and communities online where users can learn everything there is to know about using and writing Python code. A great place to start is the Python for Beginners page on python.org. This page also mentions how more experienced coders can get started with Python. Also note that text in green font following a hash sign (#) is a comment that adds context to the code.



For a custom utility to be showing in the My Utilities category of the utilities list in Cosmic Frog, it needs to be saved under My Files > My Utilities in the user’s Optilogic account:

Note that if a Python utility file is already in user’s Optilogic account, but in a different folder, user can click on it and drag it to the My Utilities folder.
For utilities to work, a requirements.txt file which only contains the text cosmicfrog needs to be placed in the same My Files > My Utilities folder (if not there already):

A customized version of the Copy dashboard to a model utility was uploaded here, and a requirements.txt file is present in the same folder too.
Once a Python utility file is uploaded to My Files > My Utilities, it can be accessed from within Cosmic Frog:

If users want to share a custom utility with other users, they can do so by right-clicking on it and choosing the “Send Copy of File” option:

The following form then opens:

When a custom utility has been shared with you by another user, it will be saved under the Sent To Me folder in your Optilogic account:

Should you have created a custom utility that you feel a lot of other users can benefit from and you are allowed to share outside of your organization, then we encourage you to submit it into Optilogic’s Resource Library. Click on the Contribute button at the left top of the Resource Library and then follow the steps as outlined in the “How can I add Python Modules to the Resource Library?” section towards the end of the “How to use the Resource Library” help article.
Utility names and descriptions by category: