In this quick start guide we will walk through the steps of exporting data from a table in the Project Sandbox to a table in a Cosmic Frog model.
Please note that DataStar is currently in the Early Adopter phase, where only users who participate in the Early Adopter program have access to it. DataStar is rapidly evolving while we work towards the General Availability release later this year. For any questions about DataStar or the Early Adopter program, please feel free to reach out to Optilogic’s support team on support@optilogic.com.
This quick start guide builds upon a previous one where unique customers were created from historical shipments using a Leapfrog-generated Run SQL task. Please follow the steps in that quick start guide first if you want to follow along with the steps in this one. The starting point for this quick start is therefore a project named Import Historical Shipments, which contains a macro called Import Shipments. This macro has an Import task and a Run SQL task. The project has a Historical Shipments data connection of type = CSV, and the Project Sandbox contains 2 tables named rawshipments (42,656 records) and customers (1,333 records).
The steps we will walk through in this quick start guide are:
First, we will create a new Cosmic Frog model which does not have any data in it. We want to use this model to receive the data we export from the Project Sandbox.
As shown with the numbered steps in the screenshot below: while on the start page of Cosmic Frog, click on the Create Model button at the top of the screen. In the Create Frog Model form that comes up, type the model name, optionally add a description, and select the Empty Model option. Click on the Create Model button to complete the creation of the new model:

Next, we want to create a connection to the just created empty Cosmic Frog model in DataStar. To do so: open your DataStar application, then click on the Create Data Connection button at the top of the screen. In the Create Data Connection form that comes up, type the name of the connection (we are using the same name as the model, i.e. “Empty CF Model for DataStar Export”), optionally add a description, select Cosmic Frog Models in the Connection Type drop-down list, click on the name of the newly created empty model in the list of models, and click on Add Connection. The new data connection will now be shown in the list of connections on the Data Connections tab (shown in list format here):

Now, go to the Projects tab, and click on the “Import Historical Shipments” project to open it. We will first have a look at the Project Sandbox and the empty Cosmic Frog model connections, so click on the Data Connections tab:

The next step is to add and configure an Export Task to the Import Shipments macro. Click on the Macros tab in the panel on the left-hand side, and then on the Import Shipments macro to open it. Click on the Export task in the Tasks panel on the right-hand side and drag it onto the Macro Canvas. If you drag it close to the Run SQL task, it will automatically connect to it once you drop the Export task:

The Configuration panel on the right has now become the active panel:

Click on the AutoMap button, and in the message that comes up, select either Replace Mappings or Add New Mappings. Since we have not mapped anything yet, the result will be the same in this case. After using the AutoMap option, the mapping looks as follows:

We see that each source column is now mapped to a destination column of the same name. This is what we expect: in the previous quick start guide, when generating the Run SQL task for creating unique customers, we made sure to tell Leapfrog to match the schema of the customers table in Cosmic Frog models (“the Anura schema”).
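For reference, the Leapfrog-generated Run SQL task from that previous guide is of this general shape (a hedged sketch, not the exact script; it assumes the shipment data carries a customer name plus latitude and longitude columns):
DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT DISTINCT customername, latitude, longitude
FROM rawshipments;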
If the Import Shipments macro has been run previously, we can just run the new Export Customers task by itself (hover over the task in the Macro Canvas and click on the play button that comes up), otherwise we can choose to run the full macro by clicking on the green Run button at the top right. Once completed, click on the Data Connections tab to check the results:

Above, the AutoMap functionality was used to map all 3 source columns to the correct destination columns. Here, we will go into some more detail on manually mapping and additional options users have to quickly sort and filter the list of mappings.

In this quick start guide we will show how users can seamlessly go from using the Resource Library, Cosmic Frog and DataStar applications on the Optilogic platform to creating visualizations in Power BI. The example covers cost to serve analysis using a global sourcing model. We will run 2 scenarios in this Cosmic Frog model with the goal of visualizing the total cost difference between the scenarios by customer on a map. We do this by coloring the customers based on the cost difference.
The steps we will walk through are:
Please note that DataStar is currently in the Early Adopter phase, where only users who participate in the Early Adopter program have access to it. DataStar is rapidly evolving while we work towards the General Availability release later this year. For any questions about DataStar or the Early Adopter program, please feel free to reach out to Optilogic’s support team on support@optilogic.com.
We will first copy the model named “Global Sourcing – Cost to Serve” from the Resource Library to our Optilogic account (learn more about the Resource Library in this help center article):

Now that the model is in our account, it can be opened in the Cosmic Frog application:


We will only have a brief look at some high-level outputs in Cosmic Frog in this quick start guide, but feel free to explore additional outputs. You can learn more about Cosmic Frog through these help center articles. Let us have a quick look at the Optimization Network Summary output table and the map:


Our next step is to import the needed input table and output table of the Global Sourcing – Cost to Serve model into DataStar. Open the DataStar application on the Optilogic platform by clicking on its icon in the applications list on the left-hand side. In DataStar, we first create a new project named “Cost to Serve Analysis” and set up a data connection to the Global Sourcing – Cost to Serve model, which we will call “Global Sourcing C2S CF Model”. See the Creating Projects & Data Connections section in the Getting Started with DataStar help center article on how to create projects and data connections. Then, we want to create a macro which will calculate the increase/decrease in total cost by customer between the 2 scenarios. We build this macro as follows:

The configuration of the first import task, C2S Path Summary, is shown in this screenshot:

The configuration of the other import task, Customers, uses the same Source Data Connection, but instead of the optimizationcosttoservepathsummary table, we choose the customers table as the table to import. Again, the Project Sandbox is the Destination Data Connection, and the new table is simply called customers.
Instead of writing SQL queries ourselves to pivot the data in the cost to serve path summary table into a new table with one row per customer containing the customer name and the total cost for each scenario, we can use Leapfrog to do it for us. See the Leapfrog section in the Getting Started with DataStar help center article and this quick start guide on using natural language to create DataStar tasks to learn more about using Leapfrog in DataStar effectively. For the Pivot Total Cost by Scenario by Customer task, the 2 Leapfrog prompts that were used to create the task are shown in the following screenshot:

The SQL Script reads:
DROP TABLE IF EXISTS total_cost_by_customer_combined;
CREATE TABLE total_cost_by_customer_combined AS
SELECT
    pathdestination AS customer,
    SUM(CASE WHEN scenarioname = 'Baseline' THEN pathcost ELSE 0 END) AS total_cost_baseline,
    SUM(CASE WHEN scenarioname = 'OpenPotentialFacilities' THEN pathcost ELSE 0 END) AS total_cost_openpotentialfacilities
FROM c2s_path_summary
WHERE scenarioname IN ('Baseline', 'OpenPotentialFacilities')
GROUP BY pathdestination
ORDER BY pathdestination;
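Before moving on, the result can be sanity-checked with a couple of quick queries, for example in a separate Run SQL task against the Project Sandbox (a minimal sketch using the table created above):
SELECT COUNT(*) AS customer_rows FROM total_cost_by_customer_combined;
SELECT * FROM total_cost_by_customer_combined LIMIT 5;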
To create the Calculate Cost Savings by Customer task, we gave Leapfrog the following prompt: “Use the total cost by customer table and add a column to calculate cost savings as the baseline cost minus the openpotentialfacilities cost”. The resulting SQL Script reads as follows:
ALTER TABLE total_cost_by_customer_combined ADD COLUMN cost_savings DOUBLE PRECISION;
UPDATE total_cost_by_customer_combined
SET cost_savings = total_cost_baseline - total_cost_openpotentialfacilities;
This task is also added to the macro; its name is "Calculate Cost Savings by Customer".
Lastly, we give Leapfrog the following prompt to join the table with cost savings (total_cost_by_customer_combined) and the customers table to add the coordinates from the customers table to the cost savings table: “Join the customers and total_cost_by_customer_combined tables on customer and add the latitude and longitude columns from the customers table to the total_cost_by_customer_combined table. Use an inner join and do not create a new table, add the columns to the existing total_cost_by_customer_combined table”. This is the resulting SQL Script, which was added to the macro as the "Add Coordinates to Cost Savings" task:
ALTER TABLE total_cost_by_customer_combined ADD COLUMN latitude VARCHAR;
ALTER TABLE total_cost_by_customer_combined ADD COLUMN longitude VARCHAR;
UPDATE total_cost_by_customer_combined SET latitude = c.latitude
FROM customers AS c
WHERE total_cost_by_customer_combined.customer = c.customername;
UPDATE total_cost_by_customer_combined SET longitude = c.longitude
FROM customers AS c
WHERE total_cost_by_customer_combined.customer = c.customername;
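As a side note, the two UPDATE statements could equally have been combined into a single statement; a minimal sketch using the same table and column names:
UPDATE total_cost_by_customer_combined AS t
SET latitude = c.latitude,
    longitude = c.longitude
FROM customers AS c
WHERE t.customer = c.customername;
We can now run the macro, and once it is completed, we take a look at the tables present in the Project Sandbox: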

We will use Microsoft Power BI to visualize the change in total cost between the 2 scenarios by customer on a map. To do so, we first need to set up a connection to the DataStar Project Sandbox from within Power BI. Please follow the steps in the “Connecting to Optilogic with Microsoft Power BI” help center article to create this connection. Here we will just show the step of getting the connection information for the DataStar Project Sandbox, which underneath is a PostgreSQL database (next screenshot), and the step of selecting the table(s) to use in Power BI on the Navigator screen (the screenshot after that):

After selecting the connection within Power BI and providing the credentials again, on the Navigator screen, choose to use just the total_cost_by_customer_combined table as this one has all the information needed for the visualization:

Using the total_cost_by_customer_combined table that we have just selected for use in Power BI, we will set up the visualization on a map with the following steps:
With the above configuration, the map will look as follows:

Green customers are those where the total cost went down in the OpenPotentialFacilities scenario, i.e. there are savings for these customers. The darker the green, the higher the savings. White customers did not see much difference in their total costs between the 2 scenarios. The one hovered over, in Marysville in Washington state, has a small increase of $149.71 in total costs in the OpenPotentialFacilities scenario as compared to the Baseline scenario. Red customers are those where the total cost went up in the OpenPotentialFacilities scenario (i.e. the cost savings are a negative number); the darker the red, the higher the increase in total costs. As expected, the customers with the highest cost savings (darkest green) are those located in Texas and Florida, as they are now being served from DCs closer to them.
To give users an idea of what type of visualization and interactivity is possible within Power BI, we will briefly cover the 2 following screenshots. These are of a different Cosmic Frog model for which a cost to serve analysis was also performed. Two scenarios were run in this model: Baseline DC and Blue Sky DC. In the Baseline scenario, customers are assigned to their current DCs, and in the Blue Sky scenario, they can be re-assigned to other DCs. The chart on the top left shows the cost savings by region (= US state) that are identified in the Blue Sky DC scenario. The other visualizations on the dashboard are all maps: the top right map shows the customers colored based on which DC serves them in the Baseline scenario, and the bottom 2 maps show the DCs used in the Baseline scenario (left) and the DCs used in the Blue Sky DC scenario (right).

To drill into the differences between the 2 scenarios, users can expand the regions in the top left chart and select 1 or multiple individual customers. This is an interactive chart, and the 3 maps are then automatically filtered for the selected location(s). In the below screenshot, the user has expanded the NC region and then selected customer CZ_593_NC in the top left chart. In this chart, we see that the cost savings for this customer in the Blue Sky DC scenario as compared to the Baseline scenario amount to $309k. From the Customers map (top right) and Baseline DC map (bottom left) we see that this customer was served from the Chicago DC in the Baseline. We can tell from the Blue Sky DC map (bottom right) that this customer is re-assigned to be served from the Philadelphia DC in the Blue Sky DC scenario.

The following links provide a downloadable (Excel) template describing the fields included in the output tables for Neo (Optimization), Throg (Simulation), Triad (Greenfield), and Hopper (Routing).
Anura 2.8 is the current schema.
The best way to understand modeling in Cosmic Frog is to understand the data model and structure. The following links provide a downloadable (Excel) template with the documentation and explanation for every table and field in the modeling schema.
For a brief review of how to use the template file, please watch the following video.
Leapfrog helps Cosmic Frog users explore and use their model data via natural language. View data, make changes, create & run scenarios, analyze outputs, learn all about the Anura schema that underlies Cosmic Frog models, and a whole lot more!
Leapfrog combines an extensive knowledge of PostgreSQL with the complete knowledge of Optilogic’s Anura data schema, and all the natural language capabilities of today’s advanced general purpose LLMs.
For a high-level overview and short video introducing Leapfrog, please see the Leapfrog landing page on Optilogic’s website.
In this documentation, we will first get users oriented on where to find Leapfrog and how to interact with it. In the section after, Leapfrog’s capabilities will be listed out with examples of each. Next, the Tips & Tricks section will give users helpful pointers so they can get the most out of Leapfrog. Finally, we will step through the process of building, running, and analyzing a Cosmic Frog model start to finish by only using Leapfrog!
Dive in if you’re ready to take the leap!
Start using Leapfrog by opening the module within Cosmic Frog:

Once the Leapfrog module is open, users’ screens will look similar to the following screenshot:

The example prompts when using the Anura Help LLM are shown here:

When first starting to use Leapfrog, users will also see the Privacy and Data Security statement, which reads as follows:
“Leapfrog AI Training: Optilogic does not use your model data to train Leapfrog. We do collect and store conversational data so it can be accessed again by the user, as well as to understand usage patterns and areas of strength/weakness for the LLM. Included in this data: natural language input prompts, text and SQL responses, as well as feedback from users. This information is maintained by Optilogic, not shared with third parties, and all of the conversation data is subject to the data security and privacy terms of the Optilogic platform.”

This message will stay visible within Leapfrog whenever it is being used, unless the user clicks on the grey cross button on the right to close the message. Once closed, the message will not be shown again while using Leapfrog.
Conversation history is stored on the platform at the user level - not in the model database - so it does not get shared when a model is shared. Note that if you are working in a Team rather than in your My Account (see documentation on Teams on the Optilogic platform here), the Leapfrog conversations you are creating will be available to the other team members when they are working with the same model.
As mentioned in the previous section, Leapfrog currently makes use of 2 large language models (LLMs): Text2SQL and Anura Help (also referred to as Anura Aficionado or A2). They will be explained in some more detail here. There is also an appendix to this documentation that lists Leapfrog questions and responses for a few example personas, showcasing how some users may predominantly use one model while others switch back and forth between them. Of course, when unsure, users can try a specific prompt using both LLMs to see which provides the most helpful response.
Please note that in the future, users will not need to indicate which LLM they want to run a prompt against, as Leapfrog will recognize which one is most suitable based on the prompt.
The Text2SQL LLM combines extensive knowledge of PostgreSQL with Optilogic’s Anura data schema, and all the natural language capabilities of today’s advanced general purpose LLMs. It has been further fine-tuned on a large set of prompt-response pairs hand-crafted by supply chain modeling experts. This allows the Text2SQL model to generate SQL queries from natural language prompts.
Prompts for which it is best to use the Text2SQL model often imply an action: “Show me X”, “Add Y”, “Delete Z”, “Run scenario A”, “Create B”, etc. See also the example prompts listed when starting a new conversation and those in the Prompt Library on the Frogger Pond community.
Leapfrog responses using this model are usually actionable: run the returned SQL query to add / edit / delete data, create a scenario or model, run a scenario, geocode locations, etc.
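To make this concrete: for the example prompt “What are the top 3 products by demand?” (which we will use in the next section), the generated query would be of this general shape (a sketch only; the table and column names here are assumptions based on the Anura schema, and Leapfrog’s actual response is shown further below):
SELECT productname, SUM(quantity) AS total_demand
FROM customerdemand
GROUP BY productname
ORDER BY total_demand DESC
LIMIT 3;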
A full list of the capabilities of both LLMs is covered in the section “Leapfrog Capabilities” further below.
Anura Help (also referred to as Anura Aficionado or A2) is a specialized assistant that leverages advanced natural language processing to help users navigate and understand the Anura schema within Optilogic's Cosmic Frog application. The Anura schema is the foundational framework powering Cosmic Frog's optimization, simulation, and risk assessment capabilities. Anura Help eliminates traditional barriers to schema understanding by providing immediate, authoritative guidance for supply chain modelers, developers, and analysts.
Anura Help’s architecture uses the Retrieval Augmented Generation (RAG) approach: based on the natural language prompt, first the most relevant documents of those in its knowledge base are retrieved (e.g. schema details or engine awareness details). Next, it uses them to generate a natural language response.
Use the Anura Help model when wanting to learn about specific fields, tables or engines in Cosmic Frog. Its core capabilities include:
Responses from Leapfrog when using the Anura Help model are text-based and generated from retrieved documents shown in the context section. This context can, for example, be of the category “column info”, where all details for a specific field are listed.
The following list compares the 2 LLMs available in Leapfrog today:
Depending on the type of question, Leapfrog’s response to it can take different forms: text, links, SQL queries, data grids, and options to create models, scenarios, scenario items, groups, run scenarios, or geocode locations. We will look at several examples of questions that result in these different types of responses in this section. This is not an exhaustive list; the next section “Leapfrog Capabilities” will go through the types of prompt-response pairs Leapfrog is capable of today.
For our first question, we used the first Text2SQL example prompt “What are the top 3 products by demand?” by clicking on it. After submitting the prompt, we see that Leapfrog is busy formulating a response:

And Leapfrog’s response to the prompt is as follows:


The metadata included here are:
Clicking on the icon with 3 dots again will collapse the response metadata.
This first prompt asked a question about the input data contained in the Cosmic Frog model. Let us now look at a slightly different type of prompt, which asks to change model input:

We are going to run the SQL query of the above response to our “Increase demand by 20%” prompt. Before doing so, let’s review a subset of 10 records of the Customer Demand input table (under the Data Module, in the Input Tables section):

Next, we will run the SQL query:

After clicking the Run SQL button at the bottom of the SQL Query section in Leapfrog’s response, it becomes greyed out so it will not accidentally be run again. Hovering over the button also shows text indicating the query was already run:

Note that closing and reopening the model or refreshing the browser will revert the Run SQL button’s state so it is clickable again.
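For a prompt like “Increase demand by 20%”, the generated query is typically a one-line UPDATE of this shape (a sketch; the actual query is the one shown in the response above, and the table and column names are assumptions based on the Anura schema):
UPDATE customerdemand SET quantity = quantity * 1.2;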
Opening the Customer Demand table again and looking at the same 10 records, we see that the Quantity field has indeed been changed to its previous value multiplied by 1.2 (the first record’s value was 643, and 643 * 1.2 = 771.6, etc.):

Running the SQL query to increase the demand by 20% directly in the master data worked fine as we just saw. However, if we do not want to change the master data, but rather want to increase the demand quantity as part of a scenario, this is possible too:


After navigating to the Scenarios module within our Cosmic Frog model, we can see the scenario and its item have been created:

Note that if desired, the scenario and scenario item names auto-generated by Leapfrog can be changed in the Scenarios module of Cosmic Frog: just select the scenario or item and then choose “Rename” from the Scenario drop-down list at the top.
As a final example of a question & answer pair in this section, let us look at one where we use the Anura Help LLM, and Leapfrog responds with text plus context:



There is a lot of information listed here; we will explain the most commonly used information:
Prompts and their responses are organized into conversations in the Leapfrog module:

Users can organize their conversations with Leapfrog by using the options from the Conversations drop-down at the top of the Leapfrog module:

Users can rate Leapfrog responses by clicking on the thumbs up (like) and thumbs down (dislike) buttons and, optionally, providing additional feedback. This feedback is used to continuously improve Leapfrog. Giving a thumbs up to indicate the response is what you expected helps reinforce correct answers from Leapfrog. When a response is not what was expected or is wrong, users can help improve Leapfrog’s underlying LLMs by giving the response a thumbs down. Thumbs down ratings and the accompanying feedback especially will be reviewed so Leapfrog can learn and become more useful over time.
When a response is not as expected, as is the case in the following screenshot, the user is encouraged to click the thumbs down button:

After clicking on the Send button, the detailed feedback is automatically added to the conversation:

The next screenshot shows an example where the user gave Leapfrog’s response a thumbs up as it was what they expected. This feedback can then be used by Leapfrog to reinforce correct answers. The user also had the option to provide detailed feedback again, using any of the following 4 optional tags: Showcase Example, Surprising, Fun, and Repeatable Use Case. In this example, the user decided not to give detailed feedback and clicked on the Close button after the detailed feedback form came up:

If you have any additional Leapfrog feedback (or questions) beyond what can be captured here, feel free to send an email to Leapfrog@Optilogic.com. You are also very welcome to ask questions, share your experiences, and provide feedback on Leapfrog in the Frogger Pond Community.
We will now go back to our first prompt “What are the top 3 products by demand” to explore some of the options users have when Data Grids are included in a Leapfrog response, which is the case when Leapfrog’s SQL Query response is a SELECT statement.



When clicking on the Download File button, a zip file named after the active Cosmic Frog model with an ID appended is downloaded to the user’s Downloads folder. The zip contains:

After clicking on Save, the following message appears beneath the Data Grid in Leapfrog’s response:

Looking in the Custom Tables section (#2 in screenshot below) of the Data module (#1 in screenshot below), we indeed see this newly created table named top3products (#3 in screenshot below) with the same contents as the Data Grid of the Leapfrog response:

If we choose to save the Data Grid as a view instead of a table, it goes as follows:

We choose Save as View and give it the name of Top3Products_View. The message that comes up once the view is created reads as follows:

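Conceptually, saving a Data Grid as a view amounts to wrapping the response’s SELECT statement in a CREATE VIEW statement; a sketch for this example (the table and column names are assumptions based on the Anura schema):
CREATE VIEW top3products_view AS
SELECT productname, SUM(quantity) AS total_demand
FROM customerdemand
GROUP BY productname
ORDER BY total_demand DESC
LIMIT 3;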
Going to the Analytics module in Cosmic Frog, choosing to add a new dashboard and in this new dashboard a new visualization, we can find the top3products_view in the Views section:

We will go back to the original Data Grid in Leapfrog’s response to explore a few more options users have here:


Please note:
In this section we will list out what Leapfrog is capable of and give examples of each capability. These capabilities include (the LLM each capability applies to is listed in parentheses):
Each of these capabilities will be discussed in the following sections, where a brief description of each capability is given, several example prompts illustrating the capability are listed, and a few screenshots showing the capability are included as well. Please remember that many more example prompts can be found in the Prompt Library on the Frogger Pond community.
Interrogate input and output data using natural language. Use it to check the completeness of input data, and to summarize input and/or output data. Leapfrog responds with SELECT Statements and shows a Data Grid preview, as we have seen above. The data grid can be exported or saved as a table or view for further use, as has also been covered above.
Example prompts:
The following 3 screenshots show examples of checking input data (first screenshot), and interrogating output data (second and third screenshot):



Tell Leapfrog what you want to edit in the input data of your Cosmic Frog model, and it will respond with UPDATE, INSERT, and DELETE SQL Statements. Users can opt to run these SQL Queries to permanently make the change in the master input data. For UPDATE SQL Queries, Leapfrog’s response will also include the option to create a scenario and scenario item that will make the change, which we will focus on in the next section.
Example prompts:
The following 3 screenshots show examples of changing values in the input data (first screenshot), adding records to an input table (second screenshot), and deleting records from an input table (third screenshot):



Make changes to input data, but through scenarios rather than by updating the master tables directly. Prompts that result in UPDATE SQL Queries will have a Scenarios part in their responses, and users can create a new scenario that makes the input data change with one click of a button.
Example prompts:
The following 3 screenshots show example prompts with responses from which scenarios can be created: create a scenario which makes a change to all records in 1 input table (first screenshot), create a scenario which makes a change to records in 1 input table that match a condition (second screenshot), and create a scenario that makes changes in 2 input tables (third screenshot):



The above screenshots show examples of Leapfrog responses that contain a Scenarios section and from which new scenarios and scenario items can be created by clicking on the Create Scenario button. In addition to the above, users can also use Leapfrog to manage scenarios through prompts that specifically create scenarios and/or items and assign specific scenario items to specific scenarios. These result in INSERT INTO SQL Statements which can then be applied by using the Run SQL button. See the following 2 screenshots for examples of this, where 1) a new scenario is created and an existing scenario item is then assigned to it, and 2) a new scenario item is created which is then assigned to an already existing scenario:


Leapfrog can create new groups and add group members to new and existing groups. Just specify the group name and which members it needs to have in the prompt and Leapfrog’s response will be one or multiple INSERT INTO SQL Statements.
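A response to such a prompt is typically one or more INSERT statements of this shape (a sketch; the Groups table’s exact column names are assumptions, and the group, member, and notes values are made up for illustration):
INSERT INTO groups (groupname, membername, status, notes)
VALUES ('FG_Products', 'FG_Widget', 'Include', 'Added via Leapfrog');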
Example prompts:
The following 4 screenshots show example prompts of creating groups and group members: 1) creates a new products group and adds products that have names with a certain prefix (FG_) to it, 2) creates a new periods group and adds 3 specific periods to it, 3) creates a new suppliers group and adds all suppliers that are located in China to it, and 4) adds a new member to an existing facilities group and, in addition, explicitly sets the Status and Notes fields of this new record in the Groups table:




Leapfrog can create a new, blank model. Leapfrog's response will ask the user to confirm that they want to create the new model before creating it. If confirmed, the response will update to contain a link which takes the user to the Leapfrog module in this newly created model in a new tab of the browser.
Example prompts:
The following 2 screenshots show an example where a new model named “FrogsLeaping” is created:


You can ask Leapfrog to kick off any model runs for you. Optionally, you can specify the scenario(s) you want to be run, which engine to use, and what resource size to use. For Neo (network optimization) runs, the user can additionally indicate if the infeasibility check should be turned on. If no scenarios are specified, all scenarios present in the model will be run. If no engine is specified, the Neo engine (network optimization) will be used. If no resource size is specified, S will be used. If it is not specified whether the infeasibility check should be turned on or off for a Neo run, it will be off by default.
Leapfrog’s response will summarize the scenario(s) that are to be run, the engine that will be used, the resource size that will be used, and, for Neo runs, whether the infeasibility check will be on or off. If the user indeed wants to run the scenario(s) with these settings, they can confirm by clicking on the Run button. If so, the response will change to contain a link to the Run Manager application on Optilogic’s platform, which will be opened in a new tab of the browser when clicked. In the Run Manager, users can monitor the progress of any model runs.
The engines available in Cosmic Frog are:
The resource sizes available are as follows, from smallest to largest: Mini, 4XS, 3XS, 2XS, XS, S, M, L, XL, 2XL, 3XL, 4XL, Overkill. Guidance on choosing a resource size can be found here.
Example prompts:
The following 2 screenshots show an example prompt where the response is to run the model: only the scenario name is specified, in which case a network optimization (Neo) is run using resource size S with the infeasibility check turned off (False):


Leapfrog's response now indicates the run has been kicked off and provides a link (click on the word "here") to check the progress of the scenario run(s) in the Run Manager.
The next screenshot shows a prompt asking to specifically run Greenfield (Triad) on 2 scenarios, where the resource size to be used is specified in the prompt too:

The last screenshot in this section shows a prompt to run a specific scenario with the infeasibility check turned on:

Leapfrog can find latitude & longitude pairs for locations (customers, facilities, and suppliers) based on the location information specified in these input tables (e.g. Address, City, Region, Country). Leapfrog’s response will ask the user to confirm that they want to geocode the specified table(s). If so, the response will change to contain a link which will open a Cosmic Frog map showing the locations that have been geocoded in a new tab of the browser.
Example prompts:
Notes on using Leapfrog for geocoding locations:
In the following screenshot, the user asks Leapfrog to geocode customers:

As geocoding a larger set of locations can take some time, it may look like the geocoding was not done or done incompletely if looking at the map or in the Customers / Facilities / Suppliers input tables shortly after kicking off the geocoding. A helpful tool which shows the progress of the geocoding (and other tools / utilities within Cosmic Frog) is the Model Activity list:


Leapfrog can teach users all about the Anura schema that underlies Cosmic Frog models, including:
Example prompts:
The following 4 screenshots show examples of these types of prompts & Leapfrog’s responses: 1) ask Leapfrog to teach us about a specific field on a specific table, 2) find out which table to use for a specific modeling construct, 3) understand the SCG to Cosmic Frog’s Anura mapping for a specific field on a specific table, and 4) ask about breaking changes in the latest Anura schema update:




Anura Help provides information around system integration, which includes:
Example prompts:
The following 4 screenshots show examples of these types of prompts & Leapfrog’s responses: 1) ask which tables are required to run a specific engine, 2) find out which engines use a specific table, 3) learn which table contains a certain type of outputs, and 4) ask about availability of template models for a specific purpose:




Leapfrog knows about itself, Optilogic, Cosmic Frog, the Anura database schema, LLMs, and more. Ask Leapfrog questions so it can share its knowledge with you. For most general questions both LLMs will generate the same or a very similar answer, whereas for questions around capabilities, each may only answer what is relevant to it.
Example prompts:
The following 5 screenshots show examples of these types of prompts & Leapfrog’s responses: 1) ask both LLMs about their version, 2) ask a general question about how to do something in Cosmic Frog (Text2SQL), 3) ask Anura Help for the Release Notes, and 4 & 5) ask both LLMs about what they are good at and what they are not good at:





Even though this documentation and the Leapfrog example prompts are predominantly in English, Leapfrog supports many languages, so users can ask questions in the language most natural to them. Where the Leapfrog response is in text form, it can respond in the language the question was asked in. Other response types, like a standard message with a link or the names of scenarios and scenario items, will be in English.
The following 3 screenshots show: 1) a list of languages Leapfrog supports, 2) a French prompt to increase demand by 20%, and 3) a Spanish prompt asking Leapfrog to explain the Primary Quantity UOM field on the Model Settings table:



To get the most out of Leapfrog, please take note of these tips & tricks:


After this is turned on, you can start using it by pressing the keyboard’s Windows key + H. A bar with a microphone which shows messages like “initializing”, “listening”, “thinking” will show up at the top of your active monitor:

Now you can speak into your computer’s microphone, and your spoken words will be turned into text. Put your cursor in Leapfrog’s question / prompt area, click on the microphone in the bar at the top so your computer starts listening, and then say what you want to ask Leapfrog; it will appear in the prompt area. You can then click on the send icon to submit your prompt / question to Leapfrog.
The following screenshots show several examples of how one can build on previous prompts and responses and try to re-direct Leapfrog, as described in bullets 6 and 7 of the Tips & Tricks above. In the first example, the user wants to delete records from an input table, whereas Leapfrog’s initial response is to change the Status of these records to Exclude. The follow-up prompt clarifies that the user wants to remove them. Note that there is no need to repeat that it is about facilities based in the USA, which Leapfrog still knows from the previous prompt:

In the following example, shown in the next 3 screenshots, the user starts by asking Leapfrog to show the 2 DCs with the highest throughput. The SQL query response only looks at Replenishment flows, but the user wants to include Customer Fulfillment flows too. Also, the SQL Query does not limit the list to the top 2 DCs. In the follow-up prompt the user clarifies this (“Like that, but…”) without needing to repeat the whole question. However, Leapfrog only picks up on the first request (adding the Customer Fulfillment flows), so in the third prompt the user clarifies further (again: “Like that, but…”), and achieves what they set out to do:



In the next 2 screenshots we see an example of first asking Leapfrog to show outputs that meet certain criteria (within 3%), and then essentially wanting to ask the same question but with the criteria changed (within 6%). There is no need to repeat the first prompt, it suffices to say something like “How about with [changed criteria]?”:


When Leapfrog only does part of what a user intends, the goal can often still be achieved in multiple steps. See the following screenshots, where the user intended to change 2 fields on the Production Count Constraints table and initially Leapfrog only changes one. The follow-up prompt simply consists of “And [change 2]”, building on the previous prompt. In the third prompt the user was more explicit in describing the 2 changes, and Leapfrog’s response is then what the user intended to achieve:


Here we will step through the process of building a complete Cosmic Frog demo model, creating an additional scenario, running this new scenario and the Baseline scenario, and interrogating some of the scenarios’ outputs, all by using only Leapfrog.
Please note that if you are trying the same steps using Leapfrog in your Cosmic Frog:
We will first list the prompts that were used to build, run, and analyze the model, and then review the whole process step-by-step through (lots of!) screenshots. Here is the list of prompts that were submitted to Leapfrog (all of them used the Text2SQL LLM):
And here is the step-by-step process shown through screenshots, starting with the first prompt given to Leapfrog to create a new empty Cosmic Frog model with the name “US Distribution”:

Clicking on the link in Leapfrog’s response will take the user to the Leapfrog module in this newly created US Distribution model:


In the next prompt (the third one from the list), distribution center (DC) and manufacturing (MFG) locations are added to the Facilities table, and customer locations to the Customers table. Note the use of a numbered list to help Leapfrog break the response up into multiple INSERT INTO statements:

After running the SQL of that Leapfrog response, the user has a look in the Facilities and Customers tables and notices that, as expected, all Latitude and Longitude values are blank:


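The INSERT statements Leapfrog generates for such a prompt are of this general shape (a sketch; the column names are assumptions based on the Anura schema, and the customer shown is a made-up example):
INSERT INTO facilities (facilityname, city, region, country)
VALUES ('DC_Reno', 'Reno', 'NV', 'US');
INSERT INTO customers (customername, city, region, country)
VALUES ('CZ_Seattle', 'Seattle', 'WA', 'US');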
Since all Facilities and Customers have blank Latitudes and Longitudes, our next (fourth) prompt is to geocode all sites:

Once the geocoding completes (which can be checked in the Model Activity list), the user clicks on one of the links in the Leapfrog response. This opens the Supply Chain map of the model in a new tab in the browser, showing Facilities and Customers, which all look to be geocoded correctly:

We can also double-check this in the Customers and Facilities tables, see for example next screenshot of a subset of 5 customers which now have values in their Latitude and Longitude fields:

For a (network optimization - Neo) model to work, we will also need to add demand. As this is an example/demo model, we can use Leapfrog to generate random demand quantities for us, see this next (fifth) prompt and response:

After clicking the Run SQL button, we can have a look in the Customer Demand input table, where we find the expected 200 records (50 customers which each have demand for 4 products). Eyeballing the values in the Quantity field, we see the numbers are, as expected, between 10 and 1000:

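A statement of this shape would generate such random demand (a sketch of what Leapfrog produces; table and column names are assumptions based on the Anura schema, and PostgreSQL’s random() supplies the randomness):
INSERT INTO customerdemand (customername, productname, quantity)
SELECT c.customername, p.productname, FLOOR(10 + random() * 991)
FROM customers AS c
CROSS JOIN products AS p;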
Our sixth prompt sets the model start and end dates, so the model horizon is all of 2025:

Again, we can double-check this after running the SQL response by having a look in the Model Settings input table:

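The UPDATE behind this is a simple one (a sketch; the actual field names on the Model Settings table are assumptions here):
UPDATE modelsettings
SET modelstartdate = '2025-01-01',
    modelenddate = '2025-12-31';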
We also need Transportation Policies; the following prompt (the seventh from our list) takes care of this and creates lanes from all MFGs to all DCs and from all DCs to all customers:

We see the 6 enumerated MFG (2 locations) to DC (3 locations) lanes when opening and sorting the Transportation Policies table, plus the first few records of the 150 enumerated DC to customer lanes. No Unit Costs are set so far (blank values):

Our eighth prompt sets the transportation unit costs on the transportation policies created in the previous step. All use a unit of measure of EA-MI which means the costs entered are per unit per mile, and the cost itself is 1 cent on MFG to DC lanes and 2 cents on DC to customer lanes:

Clicking the Run SQL button will run the 4 UPDATE statements, and we can see the changes in the Transportation Policies input table:

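Condensed to two statements instead of the four Leapfrog generated, the updates are of this shape (a sketch; column names are assumptions based on the Anura schema):
UPDATE transportationpolicies
SET unitcost = 0.01, unitcostuom = 'EA-MI'
WHERE originname LIKE 'MFG%';
UPDATE transportationpolicies
SET unitcost = 0.02, unitcostuom = 'EA-MI'
WHERE originname LIKE 'DC%';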
In order to run, the model also needs Production Policies, which the next (ninth) prompt takes care of: both MFG locations can produce all 4 products:

Again, double-checking after running the SQL from the response, we see the 8 expected records in the Production Policies input table:

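The enumeration of all MFG-product combinations can be expressed compactly with a cross join (a sketch; table and column names are assumptions based on the Anura schema, and Leapfrog’s actual response may enumerate the 8 records explicitly instead):
INSERT INTO productionpolicies (facilityname, productname)
SELECT f.facilityname, p.productname
FROM facilities AS f
CROSS JOIN products AS p
WHERE f.facilityname LIKE 'MFG%';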
Our 3 DCs have an upper limit as to how much throughput they can handle over the year, this is 50,000 for the DCs in Reno and Memphis and 100,000 for the DC in Jacksonville. Prompt number 10 sets these:

We can see these numbers appear in the Throughput Capacity field on the Facilities input table after running the SQL of Leapfrog’s response:

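The corresponding updates are of this shape (a sketch; the column name is an assumption based on the Throughput Capacity field label):
UPDATE facilities SET throughputcapacity = 50000
WHERE facilityname IN ('DC_Reno', 'DC_Memphis');
UPDATE facilities SET throughputcapacity = 100000
WHERE facilityname = 'DC_Jacksonville';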
We want to explore what happens if the maximum throughput of the DC in Memphis is increased to 100,000; this is what the eleventh prompt asks to do:

Leapfrog’s response has both a SQL UPDATE query, which would change the throughput at DC_Memphis in the Facilities input table, and a Scenarios section. We choose to click on the Create Scenario button so a new scenario is created (Increase Memphis DC Capacity) which will contain 1 scenario item (set_dc_memphis_capacity_to_100000) that sets the throughput capacity at DC_Memphis to 100,000:

Our small demo model is now complete, and we will use Leapfrog (using our twelfth prompt) to run network optimization (using the Neo engine) on the Baseline and Increase Memphis DC Capacity scenarios:

While the scenarios are running, we think about which outputs will be interesting to review, and ask Leapfrog how one can compare customer flows between scenarios (prompt number 13):

This information can come in handy in one of the next prompts to direct Leapfrog on where to look.
Using the link from the previous Leapfrog response where we started the optimization runs for both scenarios, we open the Run Manager in a new tab of the browser. Both scenarios have completed successfully as their State is set to Done:

Looking in the Optimization Network Summary output table, we also see there are results for both scenarios:

In the next few prompts Leapfrog is used to look at outputs of the 2 scenarios that have been run. The prompt (number 14 from our list) in the next screenshot aims to get Leapfrog to show us which customers have a different source in the Increase Memphis DC Capacity scenario as compared to the Baseline scenario:

Leapfrog’s response is almost what we want it to be; however, it has duplicates in the Data Grid. Therefore, we follow our previous prompt up with the next one (number 15), where we ask to see only distinct combinations. Instead of “distinct” we could have also used the word “unique” in our prompt:

We see that the source for around 11-12 customers changed from the DC in Jacksonville in the Baseline to the DC in Memphis in the Increase Memphis DC Capacity scenario.
Cost comparisons between scenarios are usually interesting too, so that is what prompt number 16 asks about:

We notice that increasing the throughput capacity at DC_Memphis leads to a lower total supply chain cost by about 56.5k USD. Next, we want to see how much flow has shifted between the DCs in the Baseline scenario compared to the Increase Memphis DC Capacity scenario, which is what the last prompt (number 17) asks about:

This tells us that the throughput at DC_Reno is the same in both scenarios, but that increasing the DC_Memphis throughput capacity allows a shift of about 24k units from the DC in Jacksonville to the DC in Memphis (which was at its maximum 50k throughput in the Baseline scenario). This volume shift is what leads to the reduction in total supply chain cost.
We hope this gives you a good idea of what Leapfrog is capable of today. Stay tuned for more exciting features to be added in future releases!
Do you have any Leapfrog questions or feedback? Feel free to use the Frogger Pond Community to ask questions, share your experiences, and provide feedback. Or, shoot us an email at Leapfrog@Optilogic.com.
Happy Leapfrogging!
PERSONA: Alex is an experienced supply chain modeler who knows exactly what to analyze but often spends too much time pulling and formatting outputs. They are looking to be more efficient in summarizing results and identifying key drivers across scenarios. While confident in their domain expertise, they want help extracting insights faster without losing control or accuracy. They see AI as a time-saving partner that helps them focus on decision-making, not data wrangling.
USE CASE: After running multiple scenarios in Cosmic Frog, Alex wants to quickly understand the key differences in cost and service across designs. Instead of manually exporting data or writing SQL queries, Alex uses Leapfrog to ask natural-language questions which saves Alex hours and lets them focus on insight generation and strategic decision-making.
Model to use: Global Supply Chain Strategy (available under Get Started Here in the Explorer).
Prompt #1

Prompt #2

Prompt #3
Prompt #4
Prompt #5

PERSONA: Chris is an experienced supply chain modeler with a well-established, repeatable workflow that pulls data from internal systems to rebuild models every quarter. He relies on consistency in the model schema to keep his automation running smoothly.
USE CASE: With an upcoming schema change in Cosmic Frog, Chris is concerned about disruptions or errors in his process and wants Leapfrog to help provide info on the changes that may require him to update his workflow.
Model to use: any.
Prompt #1
Prompt #2
Prompt #3
PERSONA: Larry Loves LLMs – he wants to use an LLM to find answers.
USE CASE: I need to understand the outputs of this model someone else built. I want to know how many products come from each supplier for each network configuration. Can Leapfrog help with that?
Yes, Leapfrog can help with that! Let's use Anura Help to better understand which tables have that data and then ask Text2SQL to pull the data.
Model to use: any; Global Supply Chain Strategy (available under Get Started Here in the Explorer).
Prompt #1
Prompt #2

In this quick start guide we will walk through importing a CSV file into the Project Sandbox of a DataStar project. The steps involved are:
Our example CSV file is one that contains historical shipments from May 2024 through August 2025. There are 42,656 records in this Shipments.csv file, and if you want to follow along with the steps below you can download a zip-file containing it here.
Please note that DataStar is currently in the Early Adopter phase, where only users who participate in the Early Adopter program have access to it. DataStar is rapidly evolving while we work towards the General Availability release later this year. For any questions about DataStar or the Early Adopter program, please feel free to reach out to Optilogic’s support team on support@optilogic.com.
Open the DataStar application on the Optilogic platform and click on the Create Data Connection button in the toolbar at the top:

In the Create Data Connection form that comes up, enter the name for the data connection, optionally add a description, and select CSV Files from the Connection Type drop-down list:

If your CSV file is not yet on the Optilogic platform, you can drag and drop it onto the “Drag and drop” area of the form to upload it to the /My Files/DataStar folder. If it is already on the Optilogic platform or after uploading it through the drag and drop option, you can select it in the list of CSV files. Note that you can filter this list by typing in the Search box to quickly find the desired file. Once the file is selected, clicking on the Add Connection button will create the CSV connection:

After creating the connection, the Data Connections tab on the DataStar start page will be active, and it shows the newly added CSV connection at the top of the list (note the connections list is shown in list view here; the other option is card view):

You can either go into an existing DataStar project or create a new one to set up a Macro that will import the data from the Historical Shipments CSV connection we just set up. For this example, we create a new project by clicking on the Create Project button in the toolbar at the top when on the start page of DataStar. Enter the name for the project, optionally add a description, change the appearance of the project if desired by clicking on the Edit button, and then click on the Add Project button:

After the project is created, the Projects tab will be shown on the DataStar start page. Click on the newly created project to open it in DataStar. Inside DataStar, you can either click on the Create Macro button in the toolbar at the top or the Create a Macro button in the center part of the application (the Macro Canvas) to create a new macro which will then be listed in the Macros tab in the left-hand side panel. Type the name for the macro into the textbox:

When a macro is created, it automatically gets a Start task added to it. Next, we open the Tasks tab by clicking on the left-most of the three tabs in the panel on the right-hand side of the macro canvas. Click on Import and drag it onto the macro canvas:

The Import task will then be placed on the macro canvas, and the Configuration tab in the right-hand side panel will automatically be opened. Here the user can enter the name for the task, select the data connection that is the source for the import (the Historical Shipments CSV connection), and the data connection that is the destination of the import (a new table named “rawshipments” in the Project Sandbox):

Connect the Import Raw Shipments task to the Start task by clicking on the connection point in the middle of the right edge of the Start task, holding the mouse down and dragging the connection line to the connection point in the middle of the left edge of the Import Raw Shipments task. Next, we can test the macro that has been set up so far by running it: either click on the Run button all the way to the right in the toolbar at the top of DataStar or click on the Run button in the Logs tab at the bottom of the macro canvas.

You can follow the progress of the macro run in the Logs tab and, once finished, examine the results on the Data Connections tab. Expand the Project Sandbox data connection and open the rawshipments table by clicking on it. A preview of up to 10,000 records of the table will be displayed in the central part of DataStar:

Please feel free to download the Cosmic Frog Python Library PDF file. Please note that this library requires Python 3.11.
You can also reference the video shown below that covers an overview on scripting within Cosmic Frog.
When demand fluctuates due to, for example, seasonality, it can be beneficial to manage inventory dynamically. This means that when the demand (or forecasted demand) goes up or down, the inventory levels go up or down accordingly. To support this in Cosmic Frog models, inventory policies can be set up in terms of days of supply (DOS): for example, for the (s,S) inventory policy, the Simulation Policy Value 1 UOM and Simulation Policy Value 2 UOM fields can be set to DOS. Say, for example, that reorder point s and order up to quantity S are set to 5 DOS and 10 DOS, respectively. This means that if the inventory falls to or below the level that is the equivalent of 5 days of supply, a replenishment order is placed that will order the amount of inventory needed to bring the level up to the equivalent of 10 days of supply. In this documentation we will cover the DOS-specific inputs on the Inventory Policies table and how the unit equivalent of a day of supply is calculated from these, and we will walk through a numerical example.
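As a quick worked example: if the demand equivalent of 1 DOS comes out to 50 units, then the reorder point s of 5 DOS corresponds to 250 units and the order up to quantity S of 10 DOS to 500 units, so a drop to 250 units or below triggers a replenishment order that brings the inventory position back up to 500 units. (The 50 units per DOS figure is made up for illustration; the sections below show how it is actually calculated.)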
In short, using DOS lets users be flexible with policy parameters; it is a good starting point for estimating/making assumptions about how inventory is managed in practice.
Note that it is recommended to already be familiar with the Inventory Policies table in Cosmic Frog before diving into the details of this help article.
The following screenshot shows the fields that set the simulation inventory policy and its parameters:

For the same inventory policy, the next screenshot shows the DOS-related fields on the Inventory Policies table; note that the UOM fields are omitted in this screenshot:

As mentioned above, when using forecasted demand for the DOS calculations, this forecasted demand needs to be specified in the User Defined Forecasts Data and User Defined Forecasts tables, which we will discuss here. This next screenshot shows the first 15 example records in the User Defined Forecasts Data table:

Next, the User Defined Forecasts table lets a user configure the time-period to which a forecast is aggregated:

Let us now explain how the DOS calculations work for different DOS settings through the examples shown in the next screenshot. Note that for all these examples the DOS Review Period First Time field has been left blank, meaning that the first 1 DOS equivalent calculation occurs at the start of this model (on January 1st) for each of these examples:

Now that we know how to calculate the value of 1 DOS, we can apply this to inventory policies which use DOS as the UOM for their simulation policy value fields. We will walk through a numerical example using the policy shown in the screenshot above (in the Days of Supply Settings section), where reorder point s is 5 DOS and order up to quantity S is 10 DOS. Let us assume the same settings as in the last example of the 1 DOS calculations in the screenshot above, explained in bullet #6: forecasted demand is used with a 10 day DOS Window, a 5 day DOS Leadtime, and a 5 day DOS Review Period, so the calculations for the equivalent of 1 DOS are the numbers in the last row shown in the screenshot, which we will use in our example below. In addition, we will assume a 2 day Review Period for the inventory policy, meaning inventory levels are checked every other day to see if a replenishment order needs to be placed. DC_1 also has 1,000 units of P1 on hand at the start of the simulation (specified in the Initial Inventory field):

Teams is an exciting new feature set on the Optilogic platform designed to enhance collaboration within Supply Chain Design, enabling companies to foster a more connected and efficient working environment. With Teams, users can join a shared workspace where all team members have seamless access to collective models and files. For a more elaborate introduction to and high-level overview of the Teams feature set, please see this “Getting Started with Optilogic Teams” help center article.
This guide will walk Administrators through the steps to set up their organization and create Teams within the Optilogic platform. For non-administrator users, there is also an “Optilogic Teams – User Guide” help center article available.
To begin, reach out to Optilogic Support at support@optilogic.com and let them know you would like to create your company’s Organization. Once they respond, they will ask you two key questions:
These questions help us determine who should have access to the Organization Dashboard, where organization administrators (“Org Admins”) can manage users, create Teams, invite Members, and more. Specifying your company’s domains also enables us to pre-populate a list of potential users, saving you time by not having to invite each colleague individually.
Once this information is confirmed, our development team will create your organization. When complete, you will be able to log in and begin using the Teams functionality.
If you have been assigned as an Organization Administrator, you can access the Organization Dashboard from the dropdown menu under your username in the top-right corner of the Optilogic platform. Click your name, then select Teams Admin from the list:

This will take you to your Organization Dashboard, where you can manage Teams and their Members.
We will first look at the Teams application within the Organization Dashboard. Here, all the organization’s teams are listed and can be managed. It will look similar to the following screenshot:


In List View format, the Teams application looks as follows, and the same sections of the team edit form mentioned in the bullets above can be opened by clicking on different parts of the team’s record in the list:

In the Members application, all the organization’s members are listed, and they can be managed here:

The following diagram gives an overview of the different roles users can have when using Optilogic Teams:

From the Organization Dashboard, while in the Teams application, click the Create Team button (as seen in the screenshots in the “Teams Application for Admins” section above) to start building a new team. The Create New Team form will come up:


Once a new team is created, members will gain access to the team. If it is their first team, a new application called Team Hub will appear in their list of applications on the Optilogic platform:

Learn how to use the Team Hub application and about switching between teams and your own My Account in the “Optilogic Teams – User Guide”.
Org Admins can change existing teams by clicking on them in the Teams application while in the Organization Dashboard. Depending on where you click on the team’s card, one of 4 sections of the Edit Team form will be shown, as was also mentioned in the “Teams Application for Org Admins” section further above. When clicking on the name of the Team, the General section is shown:

The following screenshot shows the confirmation message that comes up in case an Org Admin clicks on the Delete Team button. If they want to go ahead with the removal of the team, they can click on the Delete button. Otherwise, the Cancel button can be used to not delete the Team at this time.

The second section in the Edit Team form concerns the members of the team:

In the third section of the Edit Team form the team’s appearance can be edited:

The fourth and last part of the Edit Team form is the Invites section:

Org Admins can add new users to the organization and/or to teams by clicking on the Invite Users button while in the Members application on the Organization Dashboard. The top part of the form that comes up (next screenshot) can be used to, for example, add a contractor who will help your organization for an extended period of time – they become part of the organization and can be added to multiple teams:

In the second part of this form, people can be invited to a specific team without adding them to the overall organization; these are called Team-only users:


When someone has been emailed an invite to join a team, the email will look similar to the one in the following screenshot:

Users can click on the “Click here” link to accept the invite. More on the next steps for a user to join a team can be found in the “Optilogic Teams – User Guide” help center article.
Roles of existing organization members and the teams they are part of can be updated by clicking on the team member in the list of Members:


In the Teams section of this form Org Admins can update which team(s) the member is part of and what role they have in those teams:

For Team-only members (people who are part of 1 or multiple specific teams, but who are not part of the Organization), a third section named “Invites” will be available on this form:

As a best practice, it is recommended to perform regular housekeeping (for example weekly) on your organization’s teams and their members, and your organization’s members. This will prevent situations like a previous employee or temporary consultant still having access to sensitive team contents.
A user with an Org Admin role can also be part of any of the organization’s teams and work inside those or their own My Account workspace. To leave the Organization Dashboard and get back to the Optilogic platform and its applications, they can click on their name at the top right of the Organization Dashboard and choose “Open Optilogic Platform” from the list:

Here the Admin user can start using the Team Hub application and work collaboratively in teams, the same way as other non-Admin users do. The “Optilogic Teams – User Guide” help center article documents this in more detail.
Once you have set up your teams and added content, you are ready to start collaborating and unlocking the full potential of Teams within Optilogic!
Let us know if you need help along the way—our support team (support@optilogic.com) has your back.
DataStar is Optilogic’s new AI-powered data product designed to help supply chain teams build and update models & scenarios and power apps faster & easier than ever before. It enables users to create flexible, accessible, and repeatable workflows with zero learning curve—combining drag-and-drop simplicity, natural language AI, and deep supply chain context.
Today, up to an estimated 80% of a modeler's time is spent on data—connecting, cleaning, transforming, validating, and integrating it to build or refresh models. DataStar shrinks that time by up to 50%, enabling teams to:
The 2 main goals of DataStar are 1) ease of use, and 2) effortless collaboration. These are achieved by:
DataStar is currently in the Early Adopter (EA) phase and is rapidly evolving while we work towards a General Availability release later this year. Therefore, this documentation will be regularly updated as new functionality becomes available. If you are interested in learning more about DataStar or the Early Adopter program, please contact the Optilogic support team at support@optilogic.com.
In this documentation, we will start with a high-level overview of the DataStar building blocks. Next, creating projects and data connections will be covered before diving into the details of adding tasks and chaining them together into macros, which can then be run to accomplish the data goals of your project.
Before diving into more details in later sections, this section will describe the main building blocks of DataStar, which include Data Connections, Projects, Macros, and Tasks.
As DataStar is currently in the Early Adopter phase, this document will be updated regularly as more features become available. In this section, references to future capabilities which are not yet released are included in order to paint the larger picture of how DataStar will work. In the text it is made clear which parts are and which are not yet available in the first Early Adopter release.
Since DataStar is all about working with data, Data Connections are an important part of DataStar. These enable users to quickly connect to and pull in data from a range of data sources. Data Connections in DataStar:
Projects are the main container of work within DataStar. Typically, a Project aims to achieve a certain goal by performing all or a subset of the following: importing specific data, then cleansing, transforming & blending it, and finally publishing the results to another file/database. The scope of DataStar Projects can vary greatly; consider the following 2 examples:
Projects consist of one or more macros, which in turn consist of one or more tasks. Tasks are the individual actions or steps that can be chained together within a macro to accomplish a specific goal. In the future, multiple macros will also be able to be chained together within another macro in order to run a larger process. Tasks are split into the following 3 categories in DataStar:
The next screenshot shows an example Macro called Shipments which consists of 7 individual tasks that are chained together to create transportation policies for a Cosmic Frog model from imported Shipments and Costs data. As a last step, it also runs the model with the updated transportation policies:

Note that not all tasks to build a macro like this are yet available in the current Early Adopter version of DataStar.
Every project by default contains a Data Connection named Project Sandbox. This data connection is not global to all DataStar projects; it is specific to the project it is part of. The Project Sandbox is a Postgres database into which users generally import the raw data from the other data connections; it is where they perform transformations and save intermediate states of data, before publishing the results to a Cosmic Frog model (a data connection separate from the Project Sandbox connection). It is also possible that some of the data in the Project Sandbox is the final result/deliverable of the DataStar project, or that the results are published to a different type of file or system set up as a data connection, rather than to a Cosmic Frog model.
The next diagram shows how Data Connections, Projects, and Macros relate to each other in DataStar:

For the remainder of this document, only current Early Adopter DataStar functionality is shown in the screenshots (with a few exceptions, which will be noted in the text). The text mostly covers current functionality and will at times reference features which will be included in future DataStar versions. Within DataStar, users may notice buttons and options in drop-down and right-click menus that have been disabled (greyed out or cannot be clicked on), since new functionality is being worked on continuously. These will be enabled over time, and other new features will also gradually be added.
On the start page of DataStar, users are shown the existing projects and data connections. These can be opened or deleted here, and users also have the ability to create new projects and data connections on this start page.
The next screenshot shows the existing projects in card format:

New projects can be created by clicking on the Create Project button in the toolbar at the top of the DataStar application:

The next screenshot shows the Data Connections that have already been set up in DataStar in list view:

New data connections can be created by clicking on the Create Data Connection button in the toolbar at the top of the DataStar application:

The remainder of the Create Data Connection form will change depending on the type of connection that was chosen as different types of connections require different inputs (e.g. host, port, server, schema, etc.). In our example, the user chooses CSV Files as the connection type:

In our walk-through here, the user drags and drops a Shipments.csv file from their local computer on top of the Drag and drop area:

Now let us look at a project when it is open in DataStar. We will first get a lay of the land with a high-level overview screenshot and then go into more detail for the different parts of the DataStar user interface:

Next, we will dive a bit deeper into a macro:

The Macro Canvas for the Customers from Shipments macro is shown in the following screenshot (note that the Export task shown is not yet available in the Early Adopter release):

In addition to the above, please note following regarding the Macro Canvas:

We will move on to covering the 3 tabs on the right-hand side pane, starting with the Tasks tab:

Users can click on a task in the tasks list and then drag and drop it onto the macro canvas to incorporate it into a macro.
When adding a new task, it needs to be configured, which can be done on the Configuration tab. When a task is newly dropped onto the Macro Canvas its Configuration tab is automatically opened on the right-hand side pane. To make the configuration tab of an already existing task active, click on the task in the Macros tab on the left-hand side pane or click on the task in the Macro Canvas. The configuration options will differ by type of task, here the Configuration tab of an Import task is shown as an example:

Please note that:
Leapfrog in DataStar (aka D*AI) is an AI-powered feature that transforms natural language requests into executable DataStar tasks. Users can describe what they want to accomplish in plain language, and Leapfrog automatically generates the corresponding task or SQL query without requiring technical coding skills or manual inputs for task details. This capability enables both technical and non-technical users to efficiently manipulate data, build Cosmic Frog models, and extract insights through conversational interactions with DataStar.
Note that there are 2 appendices at the end of this documentation where 1) details around Leapfrog in DataStar's current features & limitations are covered and 2) Leapfrog's data usage and security policies are summarized.

Leapfrog’s response to this prompt is as follows:

DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT
    destination_store AS customer,
    AVG(destination_latitude) AS latitude,
    AVG(destination_longitude) AS longitude
FROM rawshipments
GROUP BY destination_store;
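Note that the GROUP BY produces exactly one customer record per unique destination store, while the AVG aggregates reconcile any small differences in the latitude and longitude values recorded across that store’s shipment records.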
Finally, we will have a look at the Conversations pane:

Within a Leapfrog conversation, Leapfrog remembers the prompts and responses thus far. Users can therefore build upon previous questions, for example by following up with a prompt along the lines of “Like that, but instead of using a cutoff date of August 10, 2025, use September 24, 2025”.
Additional helpful Leapfrog in DataStar links:
Users can run a Macro by selecting it and then clicking on the green Run button at the right top of the DataStar application:

Please note that:

Next, we will cover the Logs tab at the bottom of the Macro Canvas where logs of macros that are running/have been run can be found:

When a macro has not yet been run, the Logs tab will contain a message with a Run button, which can also be used to kick off a macro run. When a macro is running or has been run, the log will look similar to the following:

The next screenshot shows the log of an earlier run of the same macro where the first task ended in an error:

The progress of DataStar macro and task runs can also be monitored in the Run Manager application, where runs can also be cancelled if needed:

Please note that:
In the Data Connections tab on the left-hand side pane the available data connections are listed:

Next, we will have a look at what the connections list looks like when the connections have been expanded:

The tables within a connection can be opened within DataStar. They are then displayed in the central part of DataStar where the Macro Canvas is showing when a macro is the active tab.
Please note: currently, a data preview of up to 10,000 records for a table is displayed for tables in DataStar. This means that any filtering or sorting done on tables larger than 10k records is done on this subset of 10k records. At the end of this section it is explained how datasets containing more than 10k records per table can be explored by using the SQL Editor application.

A table can be filtered based on values in one or multiple columns:


Columns can be re-ordered and hidden/shown as described in the Appendix; this can be done using the Columns fold-out pane too:

Finally, filters can also be configured from a fold-out pane:

Users can explore the complete dataset of connections with tables larger than 10k records in other applications on the Optilogic platform, depending on the type of connection:
Here is how to find the database and table(s) of interest on SQL Editor:

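Once the Project Sandbox database and the table of interest are located, queries along the following lines can be used to work with the full dataset rather than the 10k preview (a minimal sketch; the filter value used here is illustrative):

SELECT COUNT(*) FROM rawshipments;        -- count all records, not limited to the 10k preview

SELECT *
FROM rawshipments
WHERE destination_store = 'Store_001'     -- illustrative filter value
LIMIT 100;                                -- inspect a sample of the filtered full dataset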
Here are a few additional links that may be helpful:
We hope you are as excited about starting to work with DataStar as we are! Please stay tuned for regular updates to both DataStar and all the accompanying documentation. As always, for any questions or feedback, feel free to contact our support team at support@optilogic.com.
The grids used in DataStar can be customized, and we will cover the options available through the screenshot below. This screenshot shows the list of CSV files in a user’s Optilogic account when creating a new CSV File connection. The same grid options are available on the grid in the Logs tab and when viewing tables that are part of any Data Connections in the central part of DataStar.

Leapfrog's brainpower comes from:
All training processes are owned and managed by Optilogic — no outside data is used.
When you ask Leapfrog a question:
Your conversations (prompts, answers, feedback) are stored securely at the user level.
Showing supply chains on maps is a great way to visualize them, to understand differences between scenarios, and to show how they evolve over time. Cosmic Frog offers users many configuration options to customize maps to their exact needs. In this documentation we will cover how to create and configure maps in Cosmic Frog.
In Cosmic Frog, a map represents a single geographic visualization composed of different layers. A layer is an individual supply chain element such as a customer, product flow, or facility. To show locations on a map, these need to exist in the master tables (e.g. Customers, Facilities, and Suppliers) and they need to have been geocoded (see also the How to Geocode Locations section in this help center article). Flow-based layers are based on output tables such as the OptimizationFlowSummary or SimulationFlowSummary; to draw these, the model needs to have been run so that outputs are present in these output tables.
Maps can be accessed through the Maps module in Cosmic Frog:

The Maps module opens and shows the first map in the Maps list; this will be the default pre-configured “Supply Chain” map for models the user created and for most models copied from the Resource Library:

In addition to what is mentioned under bullet 4 of the screenshot just above, users can also perform following actions on maps:

As we have seen in the screenshot above, the Maps module opens with a list of pre-configured maps and layers on the left-hand side:

The Map menu in the toolbar at the top of the Maps module allows users to perform basic map and layer operations:

These options from the Map menu are also available in the context menu that comes up when right-clicking on a map or layer in the Maps list.
The Map Filters panel can be used to set scenarios for each map individually. If users want to use the same scenario for all maps present in the model, they can use the Global Scenario Filter located in the toolbar at the top of the Maps module:

Now all maps in the model will use the selected scenario, and the option to set the scenario at the map-level is disabled.
When a global scenario has been set, it can be removed using the Global Scenario Filter again:

The zoom level, how the map is centered, and the configuration of maps and their layers all persist. After moving between other modules within Cosmic Frog or switching between models, when a user comes back to the map(s) in a specific model, the map settings are the same as when last configured.
Now let us look at how users can add new maps, and the map configuration options available to them.

Once done typing the name of the new map, the panel on the right-hand side of the map changes to the Map Filters panel which can be used to select the scenario and products the map will be showing:

Instead of setting which scenario to use for each map individually on the Map Filters panel, users can instead choose to set a global scenario for all maps to use, as discussed above in the Global Scenario Filter section. If a global scenario is set, the Scenario drop-down on the Map Filters panel will be disabled and users cannot open it:

On the Map Information panel, users have a lot of options to configure what the map looks like and what entities (outside of the supply chain ones configured in the layers) are shown on it:

Users can choose to show a legend on the map and configure it on the Map Legend pane:

To start visualizing the supply chain that is being modelled on a map, users need to add at least 1 layer to the map, which can be done by choosing “New Layer” from the Map menu:

Once a layer has been added or is selected in the Maps list, the panel on the right-hand side of the map changes to the Condition Builder panel which can be used to select the input or output table and any filters on it to be used to draw the layer:

We will now also look at using the Named Filters option to filter the table used to draw the map layer:

In this walk-through example, the user chooses to enable the “DC1 and DC2” named filter:

Lastly, for the Named Filters option, users can view a grid preview to ensure the correct filtered records are being drawn on the map:

In the next layer configuration panel, Layer Style, users can choose what the supply chain entities that the layer shows will look like on the map. This panel looks somewhat different for layers that show locations (Type = Point) than for those that show flows (Type = Line). First, we will look at a point type layer (Customers):

Next, we will look at a line type layer, Customer Flows:

At the bottom of the Layer Style pane a Breakpoints toggle is available too (not shown in the screenshots above). To learn more about how these can be used and configured, please see the "Maps - Styling Points & Flows based on Breakpoints" Help Center article.
Labels and tooltips can be added to each layer, so users can more easily see properties of the entities shown in the layer. The Layer Labels configuration panel allows users to choose what to show as labels and tooltips, and configure the style of the labels:

When modelling multiple periods in network optimization (Neo) models, users can see how these evolve over time using the map:

Users can now add Customers, Facilities and Suppliers via the map:

After adding the entity, we see it showing on the map, here as a dark blue circle, which is how the Customers layer is configured on this map:

Looking in the Customers table, we notice that CZ_Philadelphia has been added. Note that while its latitude and longitude fields are set, other fields such as City, Country and Region are not automatically filled out for entities added via the map:

In this final section, we will show a few example maps to give users some ideas of what maps can look like. In this first screenshot, a map for a Transportation Optimization (Hopper engine) model, Transportation Optimization UserDefinedVariables available from Optilogic’s Resource Library (here), is shown:

Some notable features of this map are:
The next screenshot shows a map of a Greenfield (Triad engine) model:

Some notable features of this map are:
This following screenshot shows a subset of the customers in a Network Optimization (Neo engine) model, the Global Sourcing – Cost to Serve model available from Optilogic’s Resource Library (here). These customers are color-coded based on how profitable they are:

Some notable features of this map are:
Lastly, the following screenshot shows a map of the Tariffs example model, a network optimization (Neo engine) model available from Optilogic’s Resource Library (here), where suppliers located in Europe and China supply raw materials to the US and Mexico:

Some notable features of this map are:
We hope users feel empowered to create their own insightful maps. For any questions, please do not hesitate to contact Optilogic support at support@optilogic.com.
Users of the Optilogic platform can easily access all files they have in their Optilogic account and perform common tasks like opening, copying, and sharing them by using the built-in Explorer application. This application sits across all other applications on the Optilogic platform.
This documentation will walk users through how to access the Explorer, explain its folder and file structure, and show how to quickly find files of interest and perform common actions.
By default, the Explorer is closed when users are logged into the Optilogic platform; they can open it at the top of the applications list:

Once the Explorer is open, your screen will look similar to the following screenshot:

This next screenshot shows the Explorer when it is open while the user is working inside the workspace of one of the teams they are part of, and not in their My Account workspace:

When a new user logs into their Optilogic account and opens the Explorer, they will find there are quite a few folders and files present in their account already. The next screenshot shows the expanded top-level folders:


As you may have noticed already, different file types can be recognized by the different icons to the left of the file’s name. The following table summarizes some of the common file types users may have in their accounts, shows the icon used for these in the Explorer, and indicates which application the file will be opened in when (left-)clicking on the file:
*When clicking on files of these types, the Lightning Editor application will be opened and a message stating that the file is potentially unsupported will be displayed. Users can click on a “Load Anyway” button to attempt to load the file in the Lightning Editor. If the user chooses to do so, the file will be loaded, but the result will usually be unintelligible for these file types.
Some file types can be opened in other applications on the Optilogic platform too. These options are available from the right-click context menus, see the “Right-click Context Menus” section further below.
Icons to the right of names of Cosmic Frog models in the Explorer indicate if the model is a shared one and if so, what type of access the user / team has to it. Hovering over these icons will show text describing the type of share too.

Learn more about sharing models and the details of read-write vs read-only access in the “Model Sharing & Backups for Multi-user Collaboration in Cosmic Frog” help center article.
While working on the Optilogic platform, additional files and folders can be created in / added to a user’s account. In this section we will discuss which applications create what types of files and where in the folder structure they can be found in the Explorer.
The Resource Library on the Optilogic platform contains example Cosmic Frog models, Cosmic Frog for Excel Apps, Python scripts, reference data, utilities, and additional tools to help make Optilogic platform users successful. Users can browse the Resource Library and copy content from there to their own account to explore further (see the “How to use the Resource Library” help center article for more details):

Please note that previously, files copied from the Resource Library were placed in a different location in users’ accounts and not in the Resource Library folder and its subfolders. The old location was a subfolder with the resource’s name under the My Files folder. Users who have been using the Optilogic platform for a while will likely still see this file structure for files copied from the Resource Library before this change was made.
Users can create new Cosmic Frog models from Cosmic Frog’s start page (see this help center article); these will be placed in a subfolder named “Cosmic Frog Models”, which sits under the My Files folder:

DataStar users may upload files to use with their data connections through the DataStar application (see this help center article). These uploaded files are then placed in a subfolder named DataStar, which sits under the My Files folder:

When working with any of the Cosmic Frog for Excel Apps (see also this help center article), the working files for these will be placed in subfolders under the My Files folder. These are named “z Working Folder for … App”:

In addition to the above-mentioned subfolders (Resource Library, Cosmic Frog Models, DataStar, and “z Working Folder for … App” folders) which are often present under the My Files top-level folder in a user’s Optilogic account, there are several other folders worth covering here:
Now that we have covered the folder and file structure of the Explorer including the default and common files and folders users may find here, it is time to cover how users can quickly find what they need using the options towards the top of the Explorer application.
There is a free type text search box at the top of the Explorer application, which users can use to quickly find files and folders that contain the typed text in their names:

There is a quick search option to find all Cosmic Frog models in the user’s account:

Users can create share links for folders in their Optilogic account to send a copy of the folder and all its contents to other users. See this “Folder Sharing” section in the “Model Sharing & Backups for Multi-User Collaboration in Cosmic Frog” help center article on how to create and use share links. If a user has created any share links for folders in their account, these can be managed by clicking on the View Share Links icon, which is the second of the 5 to the right of / underneath the search box:

When browsing through the files and folders in your Optilogic account, you may collapse and expand quite a few different folders and their subfolders. Users can at times lose track of where the file they had selected is located. To help with this, users have the “Expand to Selected File” option available to them:


In addition to using the Expand to Selected File option, please note that switching to another file in the Lightning Editor by for example clicking on the Periods.csv file will further expand the Explorer to show that file in the list too. If needed, the Explorer will also automatically scroll up or down to show the active file in the center of the list.
If you have many folders and subfolders expanded, it can be tedious to collapse them all one by one again. Therefore, users also have a “Collapse All” option at their disposal when working with the Explorer. The following screenshot shows the state of the Explorer before clicking on the Collapse All icon, which is the 4th of the 5 icons to the right of / underneath the Search box:

The user then clicks on the Collapse All icon and the following screenshot shows the state of the Explorer after doing so:

Note that the Collapse All icon has now become inactive and will remain so until any folders are expanded again.
Sometimes when deleting, copying, or adding files or folders to a user’s account, these changes may not be immediately reflected in the Explorer files & folders list as they may take a bit of time. The last one of the 5 icons to the right of / underneath the Search box provides users a “Refresh Files” option. Clicking on this icon will update the files and folders list such that all the latest are showing in the Explorer:

In this final section of the Explorer documentation, we will cover the options users have from the context menus that come up when right-clicking on files and folders in the Explorer. Screenshots and text will explain the options in the context menus for folders, Cosmic Frog models, text-based files, and all other files.
When right-clicking on a folder in the Explorer, users will see the following context menu come up (here the user right-clicked on the Model Library folder):

The options from this context menu are, from top to bottom:
Note that right-clicking on the Get Started Here folder gives fewer options: just the Copy (with the same 3 options as above), Share Link, and Delete Folder options are available for this folder.
Now, we will cover the options available from the context menu when right-clicking on different types of files, starting with Cosmic Frog models:

The options, from top to bottom, are:
Please note that the Cosmic Frog models listed in the Explorer are not actual databases, but pointer files. These are essentially empty placeholder files to let users visualize and interact with models inside the Explorer. Due to this, actions like downloading are not possible; working directly with the Cosmic Frog model databases can be done through Cosmic Frog or the SQL Editor.
When right-clicking on a Python script file, the following context menu will open:

The options, from top to bottom, are:
The next 2 screenshots show what it looks like when comparing 2 text-based files with each other:


Other text-based files, such as those with extensions of .csv, .txt, .md and .html have the same options in their context menus as those for Python script files, with the exception that they do not have a Run Module option. The next screenshot shows the context menu that comes up when right-clicking on a .txt file:

Other files, such as those with extensions of .pdf, .xls, .xlsx, .xlsm, .png, .jpg, .twb and .yxmd, have the same options from their context menus as Python scripts, minus the Compare and Run Module options. The following screenshot shows the context menu of a .pdf file:

As always, please feel free to let us know of any questions or feedback by contacting Optilogic support on support@optilogic.com.
The Anura data schema enables design in the Optilogic platform. It sits above the design algorithms to create a consistent modeling paradigm across them:

Anura is composed of the modeling categories required to build a model. Each modeling category contains tables to add pertinent detail and/or complexity to your model, as well as to identify structure, policies, costs, and constraints.

A complete list of the tables and fields within each table can be downloaded locally here.
Teams is an exciting new feature set on the Optilogic platform designed to enhance collaboration within Supply Chain Design, enabling companies to foster a more connected and efficient working environment. With Teams, users can join a shared workspace where all team members have seamless access to collective models and files. For a more elaborate introduction to and high-level overview of the Teams feature set, please see this “Getting Started with Teams” help center article.
This guide will cover how to use and take advantage of the Teams functionality on the Optilogic Platform.
For organization administrators (Org Admins), there is an “Optilogic Teams – Administrator Guide” help center article available. The Admin guide details how Org Admins can create new Teams & change existing ones, and how they can add new Members and update existing ones.
When your organization decides to start using the Teams functionality on the Optilogic platform, they will appoint one or more users to be the organization’s administrators (Org Admins) who will create the Teams and add Members to these teams. Once an Org Admin has added you to a team, you will see a new application called Team Hub when logged in on the Optilogic platform. You will also receive a notification on the Optilogic platform about having been added to a team:

Note that it is possible to invite people from outside an organization to join one of your organization’s teams. Think for example of granting access to a contractor who is temporarily working on a specific project that involves modelling in Cosmic Frog. An Org Admin can invite this person to a specific team, see the “Optilogic Teams – Administrator Guide” help center article on how to do this. If someone is invited to join a team, and they are not part of that organization, they will receive an email invitation to the team. The following screenshots show this from the perspective of the user who is being invited to join a team of an organization they are not part of.
The user will receive an email similar to the one shown below. In this case the user is invited to the “Onboarding” team.

Clicking on the “Click here” link will open a new browser tab where user can confirm to join the team they are invited to by clicking on the Join Team button:

After clicking on the Join Team button, the user will be prompted to log in to the Optilogic platform or to create an account if they do not have one already. Once logged in, they are part of the team they were invited to and they will see the Team Hub application (see next section).
They will also see a notification in their Optilogic account:

Clicking on the notifications bell icon at the top right of the Optilogic platform will open the notifications list. There will be an entry for the invite the user received to join the Onboarding team.
Should an Org Admin have deleted the invitation before the user accepts the invite, they will get the message “Failed to activate the invite” when clicking on the Join Team button:

The Team Hub is a centralized workspace where users can view and switch between the teams they belong to. At its core, Team Hub provides team members with a streamlined view of their team’s activity, resources, and members. When first opening the Team Hub application, it may look similar to the following screenshot:

Next, we will have a look at the team card of the Cosmic Frog Team:


Note that changing the appearance of a team changes it not just for you, but for all members of the team.
When clicking on a team or My Account in the Team Hub, users switch into that team and all the content shown will be that of the team. See also the next section, “Content Switching with Team Hub”, where this is explained in more detail. When switching between teams or My Account, the resources of the team you are switching to are loaded first:

Once all resources are loaded, users can click on the Close button at the bottom or wait until the dialog automatically closes after a few seconds. We will first have a look at what the Team Hub looks like for My Account, the user’s personal account, and after that also cover the Team Hub contents of a team.

The overview of a team in the Team Hub application can look similar to following screenshot:

Note that, as a best practice, teams can rely on the team’s activity feed instead of written/verbal updates from team members to understand the details of who worked on what, and when.
One of the most important features of the Team Hub application is its role as a content switcher. By default, when you log into the Optilogic platform, you’ll see only your personal content (My Account)—similar to a private workspace or OneDrive.
However, once you enter Team Hub and select a specific team, the Explorer automatically updates to display all files and databases associated with that team. This team context extends across the entire Optilogic platform. For example, if you navigate to the Run Manager, you’ll only see job runs associated with the selected team.
By switching into a team, all applications and data within the platform are scoped to that team. We will illustrate this with the following screenshots, where the user has switched to the team named “Cosmic Frog Team”.


Besides the “Cosmic Frog Team” team, this user is also part of the Onboarding team, which they have now switched to using the Team Hub application. Next, they open the Resource Library application:

Note that it is best practice to return to your personal space in My Account when finished working in a Team, to ensure workspace content is kept separate and files are not accidentally created in/added to the wrong team.
Once an organization and its teams are set up, the next step is to start populating your teams with content. Besides adding content by copying from the Resource Library as seen in the last screenshot above, there are two primary ways to add models or files to a team.
Navigate to the Team Hub and switch into your team space. From here, you can create new files, upload existing ones, or begin building new models directly within the team. Keep in mind that any files or models created within a team are visible to all team members and can be modified by them. If you have content that you would prefer not to be accessed or edited by others, we recommend either labeling it clearly or creating it within your personal My Account workspace.

When a user is in a specific team (Cosmic Frog Team here), they can add content through the Explorer (expand it by clicking on the greater-than icon at the top left of the Optilogic platform): right-clicking in the Explorer brings up a context menu with options to create new files, folders, and Cosmic Frog models, and to upload files. When using these options, the content is created in / added to the active team.
You can also quickly add content to your team by using Enhanced Sharing. This feature allows you to easily select entire teams or individual team members to share content with. When you open the share modal and click into the form, you’ll see intelligent suggestions—teams you belong to and members from your organization—appear automatically. Simply click on the teams or users listed to autofill the form. To learn more about the different ways of sharing content and content ownership, please see the “Model Sharing & Backups for Multi-User Collaboration” help center article.
Please note that, regardless of how a team’s content has been created/added:
Once you have been added to any teams and have added content, you are ready to start collaborating and unlocking the full potential of Teams within Optilogic!
Let us know if you need help along the way—our support team (support@optilogic.com) has your back.
Optilogic introduces the Lumina Tariff Optimizer – a powerful optimization engine that empowers companies to reoptimize supply chains in real-time to reduce the effects of tariffs. It provides instant clarity on today’s evolving tariff landscape, uncovers supply chain impacts, and recommends actions to stay ahead – now and into the future.
Manufacturers, distributors, and retailers around the world are faced with an enormous task trying to keep up with changing tariff policies and their supply chain impact. With Optilogic’s Lumina Tariff Optimizer, companies can illuminate their path forward by proactively designing tariff mitigation strategies that automatically consider the latest tariff rates.
With Lumina Tariff Optimizer, Optilogic users can stay ahead of tariff policy and answer critical questions to take swift action:
The following 7-minute video gives a great overview of the Lumina Tariff Optimizer tools:
Optilogic’s Lumina Tariff Optimization engine can be leveraged by modelers within Cosmic Frog, or within a Cosmic Frog for Excel app by other stakeholders across the business, to evaluate the tariff impact on their end-to-end supply chain. Optilogic enables users to get started quickly with Lumina with several items in the Resource Library that include:
This documentation will cover each of these Lumina Tariff Optimizer tools, in the same order as listed above.
The first tool in the Lumina Tariff Optimizer toolset is the Tariffs example model which users can copy to their own account from the Resource Library. We will walk through this model, covering inputs and outputs, with emphasis on how to specify tariffs and their impact on the optimal solution when running network optimization (using the Neo engine) on the scenarios in the model.
Let us start by looking at the map of the Tariffs model, which is showing the model locations and flows for the Baseline scenario:

This model consists of the following sites:
Next, we will have a look at the Products table:

As mentioned above, raw materials RM1, RM2, and RM3 are supplied by Chinese suppliers and the other 6 raw materials by European suppliers, which we can confirm by looking at the Supplier Capabilities input table:

The Bills Of Materials input table shows that each finished good takes 3 of the Raw Materials to be manufactured; the Quantity field indicates how much of each is needed to create 1 unit of finished good:

Looking at the Production Policies input table, we see that both the US and Mexico factories can produce Consumables, but Rockets are only manufactured in Mexico and Space Suits only in the US:

To understand the outputs later, we also need to briefly cover the Flow Constraints input table, which shows that the El Bajio Factory in Mexico can at a maximum ship out 3.5M units of finished goods (over all products and the model horizon together):

To enter tariffs and take them into account in a network optimization (Neo) run, users need to populate the new Tariffs input table:

There are also 2 new Neo output tables that will be populated when tariffs are included in the model, the Optimization Path Flow Summary and the Optimization Tariff Summary tables:

Tariffs can be specified at multiple levels in Cosmic Frog, so users can choose the one that fits their modeling needs and available data best:
In order to model tariffs from/to a region or country, these fields need to be populated in the Customers, Facilities, and Suppliers tables:

In the Tariffs input table, all path origin location (furthest upstream) – path destination location (furthest downstream) – product combinations to which tariffs need to be applied are captured. There can be any number of echelons in between the path origin location and path destination location where the product flows through. Consider the following path that a raw material takes:

The raw material is manufactured/supplied from China (the path origin), it then flows through a location in Vietnam, then through a location in Mexico, before ending its path in the USA (the path destination, where it is consumed when manufacturing a finished good). In this case the tariff that is set up for this raw material with path origin = China and path destination = USA will be applied. The tariff will be applied to the segment of the path where the product arrives in the region / country of its final destination. In the example here, that is the last leg (lane/segment) of the path, i.e. the Mexico to USA lane.
If a raw material takes the same initial path but instead ends in Mexico, where it is consumed in a finished good, then the tariff set up for this raw material with path origin = China and path destination = Mexico will be applied. Continuing this example: if the finished good manufactured in Mexico is then shipped to the US and sold there, and a path with a tariff is set up from Mexico to USA for the finished good, then that tariff will be applied as well (path origin = Mexico, path destination = USA). In this last example, the entire path is just the 1 segment between Mexico and USA.
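As an illustrative sketch, the three cases just described would correspond to three records in the Tariffs input table along the following lines (the column names and duty rates here are hypothetical; compare with the actual Tariffs table shown below):

INSERT INTO tariffs
  (pathoriginproperty, pathoriginvalue, pathdestinationproperty, pathdestinationvalue, productname, dutyrate)
VALUES
  ('Region', 'CN', 'Region', 'US', 'RM1', 25),      -- raw material consumed in the USA
  ('Region', 'CN', 'Region', 'MX', 'RM1', 10),      -- raw material consumed in Mexico
  ('Region', 'MX', 'Region', 'US', 'Rockets', 15);  -- finished good shipped from Mexico, sold in the USA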
Now, let us look at how this can be set up in the Tariffs input table:

Please note:
Three scenarios were run in the Tariffs example model:

Now we will look at the outputs for these 3 scenarios, first at a higher level; later on, we will also dig into some of the details of how the tariff costs are calculated.
The Financials stacked bar chart in the standard Optimization Scenario Comparison dashboard in the Analytics module of Cosmic Frog can be used to compare all costs for all 3 scenarios in 1 graph:

To compare the Tariffs by path origin – path destination and product, a new “Optimization Tariffs Summary” dashboard was created. We will look at the Baseline New Tariffs scenario first, and the Optimized New Tariffs scenario next:


Note that in the Appendix it is explained how this chart can be created.
Next, we will take a closer look at some more detailed outputs, starting with how much demand there is in the model for Rockets and Consumables, the 2 finished goods the Mexican factory in El Bajio can manufacture. The next screenshot shows the Optimization Demand Summary network optimization output table, filtered for Rockets and with a summation aggregation applied to it to show the total demand for Rockets at the bottom of the grid:

Next, we change the filter to look at the Consumables product:

In conclusion: the demand for Rockets is nearly 3.5M units and for Consumables nearly 10.5M units. Rockets can only be produced in Mexico, whereas Consumables can be produced by both factories. From the charts above, we suspected a shift in production of the Consumables finished good from the US to Mexico in the Optimized New Tariffs scenario, which we can confirm by looking at the Optimization Production Summary output table:

Since the production of Consumables requires raw materials RM1, RM2, and RM3, we expect to see the above production quantities for Consumables to be reflected in the amount of these raw materials that was moved from the suppliers in China to the US vs to Mexico. We can see this in the Optimization Flow Summary network optimization output table, which is filtered for the 2 scenarios with new tariffs, Port to Port lanes, and these 3 raw materials:

The custom Optimization Tariff Summary and Optimization Path Flow Summary output tables are automatically generated after running a network optimization on a model with a populated Tariffs table. The first of these 2 is shown in the next screenshot, where we have filtered for the raw materials RM1, RM2, and RM3 again, plus the Consumables finished good, for the 2 scenarios that use the new tariffs:

Where the Optimization Tariff Summary output table summarizes the tariffs at the scenario - path origin – path destination – product level, the Optimization Path Flow Summary output table gives some more detail around the whole path, and on which segments the tariffs are applied. The next 2 screenshots show 6 records of this output table for the Tariffs example model:

For the 2 scenarios that use the new tariffs, records are filtered for raw material RM1 where the Path Start Location represents the CN region and the Path End Location represents the MX region. These Path Start and End Locations are automatically generated based on the Path Origin Property and Value and the Path Destination Property and Value set in the Tariffs input table. Scrolling right for these 6 records:

We see that the path for RM1 is the same in both scenarios: it originates at location Guangzhou in China, moves to Shanghai Port (CN), from Shanghai Port to Altamira Port (MX), and from Altamira Port to the El Bajio Factory (MX). The calculations of the Tariff Cost based on the Flow Quantity are the same as explained above, and we see that the tariffs are applied on the second segment, where the product arrives in the region / country of its final destination.
Wondering where to go from here? If you want to start using tariffs in your own models but are not exactly sure where to start, please see the “Cosmic Frog Utilities to Create the Tariffs Table” section further below, which also includes step-by-step instructions based on what data you have available.
In the next section, we will first discuss how quick sensitivity analyses around tariffs can be run using a Cosmic Frog for Excel App.
To enable Cosmic Frog users, and also managers and executives with no or limited knowledge of Cosmic Frog, to run quick sensitivity scenarios around changing tariffs, Optilogic has developed an Excel Application for this specific purpose. Users can connect to their Cosmic Frog model that contains a populated Tariffs input table and indicate which tariffs to increase/decrease by how much, run network optimization with these changed tariffs, and review the optimization tariff summary output table, all in 1 Excel workbook. Users can download this application and related files from the Resource Library.
The following represents a typical workflow when using the Tariffs Rapid Optimizer application:


For users to take advantage of the power of the Lumina Tariff Optimizer, they will want to create their own network optimization model which includes a populated Tariffs input table (see also the “Tariffs Model – Tariffs Table” section earlier in this documentation). Depending on the data available to the user, populating the Tariffs input table can be a straightforward task or a difficult one if little or no tariff data is known/available within the organization. Optilogic has developed 3 utilities to help users with this task. The utilities are available from within Cosmic Frog, which will be covered in this section of the documentation, and they are also available through the Cosmic Frog for Excel Tariffs Builder App, which will be covered in the next section. A short description of each utility follows below; each will be covered in more detail later in this section:
In Cosmic Frog, they are accessible from the Utilities module (click on the 3 horizontal bars icon at the top left in Cosmic Frog to open the Module menu drop-down and select Utilities):

The utilities are listed under System Utilities > Tariff.
The latter 2 utilities hook into Avalara APIs, and users need to obtain their own Avalara API key for each to be able to use these utilities from within Cosmic Frog or the Tariffs Builder Excel App.
The following list shows the recommended steps for users with varying levels of Tariffs data available to them from least to most data available (assuming an otherwise complete Cosmic Frog model has been built):
To populate the Tariffs table with all possible path origin – path destination – product combinations, based on the contents of the Transportation Policies input table, use this first utility:

Consider a small model with 1 customer in the US, 2 facilities (1 DC and 1 factory) both in the US, 1 supplier in China, and 2 products (1 finished good and 1 component):




After running the 1 Generate Tariff Paths utility (using Region as the data to use for the path origin and path destination), the Tariffs table is generated and populated as shown in the next 2 screenshots:

All combinations for path origin region, path destination region, and product have been added to the Tariffs table. Scrolling further right, we see the remaining fields of this table:

To update the HS Code field in the Tariffs table, we can use the second utility:

Users can find the full path of a file uploaded to their Optilogic account as follows:

The file containing the product master data needs to have the same columns as shown in the next screenshot:

Note that columns B-F contain product information that does not match the product names in Cosmic Frog, as this is just an example to show how the utility works.
After running the 2 HS Code Classification utility, we see that the HS Code field in the Tariffs table is now populated:

To use the HS Code field to next look up duty rates we can use the third utility:

After running the 3 Lookup Duty Rates utility, we see that the Duty Rate field in the Tariffs table is now populated:

The raw output from the API is placed in the Duty Rate field, and the user needs to update this so that the field contains just a number representing the total duty rate. For the second record (US region to China region for product RM), the total duty rate is 35% (25% + 10%), so the user needs to enter 35 in this field. For the third record (China region to US region for product Rockets), the total duty rate is 27.5% (7.5% + 20%), so the user needs to enter 27.5 in this field. For the fourth record (China region to US region for product RM), the total duty rate is 25%, so the user needs to enter 25 in this field.
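Since Cosmic Frog models are Postgres databases, this cleanup can also be done in bulk through the SQL Editor rather than record by record. A minimal sketch, assuming the table and field are named tariffs and dutyrate (verify the actual names against your model’s schema):

-- Replace the raw API text with the summed total duty rate for the records discussed above
-- (column names are assumptions; adjust to your schema)
UPDATE tariffs SET dutyrate = '35'   WHERE pathoriginvalue = 'US' AND pathdestinationvalue = 'CN' AND productname = 'RM';
UPDATE tariffs SET dutyrate = '27.5' WHERE pathoriginvalue = 'CN' AND pathdestinationvalue = 'US' AND productname = 'Rockets';
UPDATE tariffs SET dutyrate = '25'   WHERE pathoriginvalue = 'CN' AND pathdestinationvalue = 'US' AND productname = 'RM';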
When running a utility in Cosmic Frog, users can track the progress in the Model Activity window:

The 3 utilities covered in the previous section to generate and populate the Tariffs input table are also made available in the Cosmic Frog for Excel Tariffs Builder App, which we will cover in this section. Users can download this application and related files from the Resource Library.
The following represents a typical workflow when using this Tariffs Builder application:

The next screenshot shows the Tariffs table after just running the Build Tariff workflow (bullet 4 in the list above):

The next screenshot shows the Product Master worksheet, which contains the product information to be used by the HS Code Classification workflow. It needs to be in this format, and users should enter as much product information here as possible:

After also running the HS Code Classification and the Duty Rate Lookup workflows (bullets 6 and 7 in the list further above), we see that these fields are now also populated on the Tariffs worksheet:

We hope users feel empowered to take on the challenging task of incorporating tariffs into their optimization workflows. For any questions, please do not hesitate to contact Optilogic support on support@optilogic.com.
In this appendix we will show users how to create a stacked bar chart for each path origin – path destination pair, showing the tariff costs by product.
In the Analytics drop-down menu in the toolbar while in the Analytics module of Cosmic Frog, select New Dashboard, give it a name (e.g. Optimization Tariff Summary), then click on the blue Visualization button on the top right to create a new chart for the dashboard. In the New Visualization configuration form that comes up, type “tariff” in the Tables Search box, then check the box for the Optimization Tariff Summary table in the list, and click on Select Data.

To create the OD Path calculated field, click on the plus icon at the top right of the Fields list and select Calculated Field which brings up the Edit Calculated Field configuration window:

For a detailed walk-through of all map features, please see the "Getting Started with Maps" Help Center article.
You can edit and filter the information shown in a map layer by utilizing the Layer Menu on the right-hand side of your map screen. This menu opens when a layer is selected and contains the following tabs:
From the layer style menu we can modify how the layer will appear on the map.

From the layer label menu we can add text descriptors to the map layer.

The “Labels” dropdown menu allows you to add a text descriptor next to a layer item. Only one item may be selected in the Labels dropdown.

The “Tooltip” is a floating box that appears when hovering over a layer element. You can add/remove items displayed in the tooltip by selecting them in the “Tooltips” menu.

The Condition Builder menu allows you to edit the data included in your layer. The most important step is to select, using the drop-down menu, the Table Name you wish to show on the map. This can include point items like Customers or Facilities, or transportation arcs like OptimizationFlowSummary.

Within the Condition Builder you can use conditions to filter the table further to a subset of the data in the table. The filter syntax is similar to the syntax used when creating scenarios.
You can edit and filter the base data shown on a map using the Map Filter menu. This menu opens when the map name is highlighted or selected. From here you are able to select the following items to be shown in the map:
Note: leaving the product filter blank will include all products in the model.


The SQL Editor helps users write, edit, and execute SQL (Structured Query Language) queries within Optilogic’s platform. It provides direct access to database objects such as tables and views stored within the platform. In this documentation, the Anura Supply Chain Model Database (Cosmic Frog’s database) will be used as the database example.
Anura model exploration and editing are enabled through the three windows of the SQL Editor:

The Anura database is stored in PostgreSQL and exclusively supports PostgreSQL query statements to ensure optimized performance. Visit https://www.postgresql.org/ for more detailed information.
To enable the SQL Editor, select a table or view from a database. Once selected, the SQL Editor will prepopulate a Select query, and the Metadata Explorer displays the table schema to enable initial data exploration.

The Database Browser offers several tools to explore your databases and display key information.

The Query Editor enables users to create and execute custom SQL queries and view the results. Reserved words are highlighted in blue to assist in SQL editing. This window is not enabled until a model table or view has been selected from the database browser; once selected, the user is able to customize this query to run in the context of the selected database.

The Metadata Explorer provides a set of tools to efficiently create and store SQL queries.

SQL is a powerful language that allows you to manipulate and transform tabular data. The query basics overview will help guide you through creating basic SQL queries.

Example 1: Filter Criteria - Customers with status set to Include that are geocoded
SELECT A.CustomerName, A.Status, A.Region
FROM customers A
WHERE A.Latitude IS NOT NULL AND A.Status = 'Include'
Example 2: Summarizing Records - Regions with 2 or more geocoded customers
SELECT A.Region, A.Status, COUNT(*) AS Cnt
FROM customers A
WHERE A.Latitude IS NOT NULL
GROUP BY A.Region, A.Status
HAVING COUNT(*) > 1
ORDER BY Cnt DESC

Often, your model analysis will require you to use data stored in more than one table. To include multiple tables in a single SQL query, you will have to use table joins to list the tables and their relationships.
If you are unsure if all joined values are present in both tables, leverage a Left or Right join to ensure you don’t unintentionally exclude records.

Example 1: Inner Join - Join Customer Demand and Customers to add Region to Demand
SELECT A.CustomerName, A.ProductName, B.Region, A.Quantity
FROM customerdemand A INNER JOIN customers B
ON A.CustomerName = B.CustomerName
Example 2: Left Join - Find Customer Demand records missing Customer record
SELECT A.CustomerName, A.ProductName, B.Region, A.Quantity
FROM customerdemand A LEFT JOIN customers B
ON A.CustomerName = B.CustomerName
WHERE B.CustomerName IS NULL
Example 3: Inner Join & Aggregation – Summarize Demand by Region
SELECT B.Region, A.ProductName, SUM(CAST(A.Quantity AS int)) AS Quantity
FROM customerdemand A INNER JOIN customers B
ON A.CustomerName = B.CustomerName
GROUP BY B.Region, A.ProductName

When data is separated into two or more tables due to categorical differences in the data, a join won't work because the tables share a common structure rather than a relationship. A UNION allows you to merge the results of two separate table queries into a single unified output. Ensure each query has the same number of columns in the same order.
Example 1: UNION – Create a unified view of all customers and facilities that are geocoded
SELECT A.CustomerName AS SiteName, A.City, A.Region, A.Country, A.Latitude, A.Longitude, 'Cust' AS Type
FROM customers A
UNION
SELECT B.FacilityName AS SiteName, B.City, B.Region, B.Country, B.Latitude, B.Longitude, 'Facility' AS Type
FROM facilities B

As queries grow in complexity, it is often easiest to reset the table references by creating a sub-query. A sub-query allows you to create a new virtual table and reference this abbreviated name and structure as you build out a query in phases.
Example 1: Subquery + UNION – Create a unified view of all customers and facilities that are geocoded
SELECT C.SiteName, C.City, C.Region, C.Country, C.Latitude, C.Longitude, C.Type
FROM (
SELECT A.CustomerName AS SiteName, A.City, A.Region, A.Country, A.Latitude, A.Longitude, 'Cust' AS Type
FROM customers A
UNION
SELECT B.FacilityName AS SiteName, B.City, B.Region, B.Country, B.Latitude, B.Longitude, 'Facility' AS Type
FROM facilities B
) C
WHERE C.Latitude IS NOT NULL

As data tables grow, it is often more efficient to use a table filter to find missing values than a left join and null filter criteria.
Example 1: Table Search Filter – CustomerDemand without a Customer match
SELECT * FROM customerdemand A
WHERE NOT EXISTS (SELECT B.CustomerName FROM customers B WHERE A.CustomerName = B.CustomerName)

The Analytics module in Cosmic Frog allows you to display data from tables and views. Custom queries can be stored as views, enabling the Analytics module to reference this virtual table to display results. Creating a view follows a very similar query construct as a sub-query, but rather than layering in a select statement, you add CREATE VIEW viewname AS (query).
Once created, a view can be selected within the Analytics module of Cosmic Frog.

Example 1: Create View – Creating an all-site view
CREATE VIEW V_All_Sites AS
(
SELECT C.SiteName, C.City, C.Region, C.Country, C.Latitude, C.Longitude, C.Type
FROM (
SELECT A.CustomerName AS SiteName, A.City, A.Region, A.Country, A.Latitude, A.Longitude, 'Cst' AS Type
FROM customers A
UNION
SELECT B.FacilityName AS SiteName, B.City, B.Region, B.Country, B.Latitude, B.Longitude, 'Fac' AS Type
FROM facilities B
) C
)
Example 2: Delete View – Delete V_all_sites view
DROP VIEW v_all_sites

SQL queries can also modify the contents and structure of data tables. This is a powerful capability, and the results, if improperly applied, cannot be undone.
Table updates & modifications can be completed within Cosmic Frog, with the added benefit of having context on allowed column values. This can also be done within the SQL Editor by executing UPDATE and ALTER TABLE SQL statements.
Example 1: Modifying Tables – Adding Additional Notes Columns
ALTER TABLE Customers
ADD Notes_1 character varying(250)
Example 2: Modifying Values – Updating Notes Columns
UPDATE Customers
SET Notes_1 = CONCAT(Country , '-' , Region)
Example 3: Modifying Tables – Delete New Notes Columns
ALTER TABLE Customers
DROP COLUMN Notes_1
Example 4: Copying Tables – Copy Customers Table
SELECT *
INTO Customers_1
FROM Customers
Example 5: Deleting Tables – Delete the copied Customers_1 table
DROP TABLE Customers_1
Visit https://www.postgresqltutorial.com/ for more information on PostgreSQL query syntax.
In this quick start guide we will show how Leapfrog AI can be used in DataStar to generate tasks from natural language prompts, no coding necessary!
Please note that DataStar is currently in the Early Adopter phase, where only users who participate in the Early Adopter program have access to it. DataStar is rapidly evolving while we work towards the General Availability release later this year. For any questions about DataStar or the Early Adopter program, please feel free to reach out to Optilogic’s support team on support@optilogic.com.
This quick start guide builds upon the previous one where a CSV file was imported into the Project Sandbox; please follow the steps in that guide first if you want to follow along with the steps in this quick start. The starting point for this quick start is therefore a project named Import Historical Shipments that has a Historical Shipments data connection of type = CSV, and a table in the Project Sandbox named rawshipments, which contains 42,656 records.
The Shipments.csv file that was imported into the rawshipments table has the following data structure (showing 5 of the 42,656 records):

Our goal in this quick start is to create a task using Leapfrog that will use this data (from the rawshipments table in the Project Sandbox) to create a list of unique customers, where the destination stores function as the customers. Ultimately, this list of customers will be used to populate the Customers input table of a Cosmic Frog model. A few things to consider when formulating the prompt are:
Within the Import Historical Shipments DataStar project, click on the Import Shipments macro to open it in the macro canvas; you should see the Start and Import Raw Shipments tasks on the canvas. Then open Leapfrog by clicking on the Leapfrog tab, which is the right-most tab in the panel on the right-hand side of the macro canvas. Alternatively, click on the “How can I help you” speech bubble with the frog icon in the toolbar at the top of DataStar. Next, we will write our prompt in the “Write a message…” textbox.

Keeping in mind the 5 items mentioned above, the prompt we use is the following: “Use the raw shipments table to create unique customers (the destination column); average the latitudes and longitudes. Only use records with the ship date between July 1 2024 and June 30 2025. Match to the anura schema of the Customers table”. Please note that tips for writing successful prompts and variations of this prompt (some working, others not) can be found in the Tips for Prompt Writing section further below.
After clicking on the send icon to submit the prompt, Leapfrog will take a few seconds to consider the prompt and formulate a response. The response will look similar to the following screenshot, where we see from top to bottom:

The resulting SQL Script reads:
DROP TABLE IF EXISTS Customers;
CREATE TABLE Customers AS
SELECT Destination_Store AS CustomerName, AVG(Destination_Latitude) AS Latitude, AVG(Destination_Longitude) AS Longitude
FROM RawShipments
WHERE TO_DATE(Ship_Date, 'DD/MM/YYYY') BETWEEN '2024-07-01' AND '2025-06-30'
GROUP BY Destination_Store;
Those who are familiar with SQL will be able to tell that this will indeed achieve our goal. Since that is the case, we can click on the Add to Macro button at the bottom of Leapfrog's response to add this as a Run SQL task to our Import Shipments macro. When hovering over this button, you will see that Leapfrog suggests where to put it on the macro canvas and that it will connect it to the Import Raw Shipments task, which is what we want. Clicking on the Add to Macro button then adds it.

We can test our macro so far by clicking on the green Run button at the top right of DataStar. Please note that:
Once the macro is done running, we can check the results. Go to the Data Connections tab, expand the Project Sandbox connection and click on the customers table to open it in the central part of DataStar:

We see that the customers table resulting from running the Leapfrog-created Run SQL task contains 1,333 records. Also notice that its schema matches that of the Customers table of Cosmic Frog models, which includes columns named customername, latitude, and longitude.
Writing prompts for Leapfrog that produce successful responses (i.e. where the generated SQL Script achieves what the prompt writer intended) may take a bit of practice. The Mastering Leapfrog for SQL Use Cases: How to write Prompts that get Results post on the Frogger Pond community portal has some great advice which applies to Leapfrog in DataStar too. It is highly recommended to give it a read; the main points of advice follow here too:
As an example, let us look at variations of the prompt we used in this quick start guide, to gauge the level of granularity needed for a successful response. In this table, the prompts are listed from least to most granular:
Note that in the above prompts, we are quite precise about table and column names, and no typos were made by the prompt writer. However, Leapfrog can generally manage typos well and can often pick up table and column names even when they are not explicitly used in the prompt. So while being more explicit generally results in higher accuracy, it is not necessary to always be extremely explicit; we simply recommend being as explicit as you can be.
Python models can be run on your “local” IDE instance, or you can leverage the power of hyper-scaling by running the module as a Job. When running as a Job, you have access to a number of different machine configurations as follows:
For reference, a typical laptop will be equivalent to either an XS or S machine configuration.
Complex optimization models may require more CPU cores to solve quickly, and large-scale simulations may require more RAM due to the increase in data required for model fidelity.
Finding problems with any Cosmic Frog model’s data has just become easier with the release of the Integrity Checker. This tool scans all tables or a selected table in a model and flags any records with potential issues. Field level checks to ensure fields contain the right type of data or a valid value from a drop-down list are included, as are referential integrity checks to ensure the consistency and validity of data relationships across the model’s input tables.
In this documentation we will first cover the Integrity Checker tool’s scope, how to run it, and how to review its results. Next, we will compare the Integrity Checker to other Cosmic Frog data validation tools, and we will wrap up with several tips & tricks to help users make optimal use of the tool.
The Integrity Checker extends cell validation and data entry helper capabilities to help users identify a range of issues relating to referential integrity and data types before running a model. The following types of data and referential integrity issues are checked for when the Integrity Checker is run:

Here, we provide a high-level description of each of these 4 categories; in the appendix at the end of this help center article, more details and examples for each type of check are given. From left to right:
The Integrity Checker can be accessed in two ways while in Cosmic Frog’s Data module: from the pane on the right-hand side that also contains Model Assistant and Scenario Errors or from the Grid drop-down menu. The latter is shown in the next screenshot:

*Please note that in this first version of the Integrity Checker, the Inventory Policies and Inventory Policies Multi-Time Period tables are not included in any checks the Integrity Checker performs. All other tables are.
The second way to access the Integrity Checker is, as mentioned above, from the pane on the right-hand side in Cosmic Frog:

If the Integrity Checker has been run previously on a model, opening it again will show the previous results and gives the user the option to re-run it by clicking on a “Rerun Check” button, which we will see in screenshots further below.
After starting the Integrity Checker in one of the 2 ways described above, a message indicating it is starting will appear in the Integrity Checker pane on the right-hand side:

While the Integrity Checker is running, the status of the run is continuously updated, and results are added underneath as checks on individual tables complete. Only tables which have errors in them will be listed in the results.

Once the Integrity Checker run is finished, its status changes to Completed:

Users can see the errors identified by the Integrity Checker by clicking on one of the table cards which will open the table and the Integrity Checker Errors table beneath it:

Clicking on a record in the Integrity Checker Errors table will filter the table above (here the Transportation Policies table) down to the record(s) with that error:

The user can go through each record in the Integrity Checker Errors table at the bottom and filter the table above down to the associated records to review the errors and possibly fix them. In the next screenshot, the user has moved on to the second record in the Integrity Checker Errors table:

We will look at one more error, the one that was found on the Products table:

Finally, the following screenshot shows what it looks like when the Integrity Checker is run on an individual table and no errors are found:

There are additional tools in Cosmic Frog which can help with finding problems in the model's data and overall construction. The table below gives an overview of how these tools compare to each other, to help users choose the most suitable one for their situation:
Please take note of the following so you can make optimal use of the Integrity Checker capabilities:


We saw the next diagram further above in the Integrity Checker Scope section. Here we will expand on each of these categories and provide examples.

From left to right:
Note that the numeric and data type checks sound similar, but they are different: a value in a field can pass the data type check (e.g. a double field contains the value -2000), but not the numeric check (a latitude field can only contain values between -90 and 90, so -2000 would be invalid).
We hope you will find the Integrity Checker to be a helpful additional tool to facilitate your model building in Cosmic Frog! For any questions, please contact Optilogic support on support@optilogic.com.
When modeling supply chains, stakeholders are often interested in understanding the cost to serve specific customers and/or products for segmentation and potential repositioning purposes. In order to calculate the cost to serve, variable costs incurred all along the path through the supply chain need to be aggregated, while fixed costs need to be apportioned to the correct customer/product. This is not always straightforward and easy to do; think for example of multi-layer BOMs.
When running a network optimization (using the Neo engine) in Cosmic Frog, these cost to serve calculations are done automatically, and the outputs are written into three output tables. In this help center article, we will cover these output tables and how the underlying calculations that populate them work.
First, we will briefly cover the example model used for screenshots for most of this help center article, then we will cover the 3 cost to serve output tables in some detail, and finally we will discuss a more complex example that uses detailed production elements too.
There are 3 network optimization output tables which contain cost to serve results; they can be found in the Output Summary Tables section of the output tables list:

We will cover the contents of and the calculations used for these tables by using a relatively simple US Distribution model, which does use quite a few different cost types. This model consists of:
The following screenshot shows the locations and flows of one of the scenarios on a map:

One additional important input to mention is the Cost To Serve Unit Basis field on the Model Settings input table:

The user can select Quantity, Volume, or Weight. This basis is used when costs need to be allocated based on amounts of product (e.g. amount produced or moved). For example, say we need to allocate $100,000 of fixed operating cost at DC_Raleigh to 3 outbound flows. The result is different when we use a different cost to serve unit basis:
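For illustration (hypothetical numbers): if the 3 flows carry 100, 300, and 600 units, the Quantity basis allocates $10,000, $30,000, and $60,000 of the fixed cost to them, respectively (100 of 1,000 total units = 10%, and so on). If those same flows weigh 2, 3, and 5 tons, the Weight basis instead allocates $20,000, $30,000, and $50,000.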
Note that in this documentation the Quantity cost to serve unit basis is used everywhere, which is the default.
We will cover the 3 output tables related to Cost To Serve using the above described model as an example in the screenshots. Further below in the document, we will also show an example of a model that uses some detailed production inputs and its cost to serve outputs. Let us start with examining the most detailed cost to serve output table, the Cost To Serve Path Segment Details table. This output table contains each segment of each path, from the most upstream product source location to customer fulfillment, which is the most downstream location. Each path segment represents an activity along the supply chain: this can be production when product is made/supplied, inventory that is held, a flow where product is moved from one location to another, or the use of a bill of materials when raw materials are used to create intermediates or finished goods.
Note that these output tables are very wide due to the number of columns they contain. In the screenshots, columns are often re-ordered, and some may be hidden if they do not contain any values or are not the topic of discussion, so they may not look exactly the same as what you see in your Cosmic Frog. Also, grids are often sorted to show records in increasing order of path ID and/or segment sequence.

Please note that several columns are not shown in the above screenshot, these include:
Removing some additional columns and scrolling right, we see a few more columns for these same paths:

We will stay with the example of the path from MFG_Detroit to DC_Jacksonville to CZ_Hartford for 707 units of P1_Bullfrog to explain the costs (Path ID = 1 in the above). Here is a visual of this path, shown on the map:

The following 4 screenshots show the 3 segments of this path in the Optimization Cost To Serve Path Segment Details output table, and the different costs that are applied to each segment. In this first screenshot, the left most column is the Segment Sequence, and a couple of fields that were not present in the screenshots above are now shown:




Let us put these on a map again, together with the calculations:

There are 2 types of costs associated with the production of 707 units of P1_Bullfrog at MFG_Detroit:
For the flow of 707 units from MFG_Detroit to DC_Jacksonville, there are 4 costs that apply:
There are 7 different costs that apply to the flow of the 707 units of P1_Bullfrog from DC_Jacksonville to CZ_Hartford where it fulfills the customer’s demand of 707 units:
There are cost fields in the Optimization Cost To Serve Path Segment Details output table that are not shown in the above screenshots, as these are all blank in the example model used. These fields, and how they are calculated should there be inputs for them, are as follows:
In the above examples we have seen Segment Types of production and flow. When inventory is modelled, we will also start seeing segments with Segment Type = inventories, as shown in this next screenshot (Cosmic Frog outputs of the Optimization Cost To Serve Path Segment Details table were copied into Excel from which the screenshot was then taken):

Here, 75.41 units of product P4_Polliwog were produced at MFG_Detroit in period 2025 (segment 1), then moved to DC_Jacksonville in the same period (segment 2), where they have been put into inventory (segment 3). These units are then used to fulfill customer demand at CZ_Columbia in the next period, 2026 (segment 4).
The Optimization Cost To Serve Path Segment Details table can also contain records which have Segment Type = no_activity. On these records there is no Segment Origin – Segment Destination pair, just one location which is not being used during the period (0 throughput). This record is then used to allocate fixed operating, startup, and/or closing costs to that location.
There are still a few fields in the Optimization Cost To Serve Path Segment Details table that have not been covered so far, these are:
The Optimization Cost To Serve Path Summary table is an output table which is an aggregation of the Optimization Cost To Serve Path Segment Details table, aggregated by Path ID. In other words, the table contains 1 record for each path where all costs of the individual segments of the path are rolled up into 1 number. Therefore, this table contains many of the same fields as the Path Segment Details table, just not the segment specific ones. The next screenshot shows the record for Path ID = 1 of the Optimization Cost To Serve Path Summary table:

This output table summarizes the cost to serve at the customer-product-period level, and it can be used to create reports at the customer or product level by aggregating further.

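As an illustration of aggregating further, a minimal Python sketch using the cosmicfrog library might look as follows (the model name and the table and column names used here are assumptions for illustration only):

from cosmicfrog import FrogModel

model = FrogModel("US Distribution")  # hypothetical model name
# read the summary output table into a pandas DataFrame (assumed behavior of read_table)
cts = model.read_table("optimizationcosttoservesummary")
# roll the customer-product-period records up to one total per customer
# (customername and totalcosttoserve column names are assumptions)
by_customer = cts.groupby("customername", as_index=False)["totalcosttoserve"].sum()
print(by_customer.head())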
Note that all 3 of these output tables can also be used in the Analytics module of Cosmic Frog to create cost to serve dashboards. For example, the Optimization Cost To Serve Summary output table is used in 2 standard analytics dashboards that are part of any new Cosmic Frog model and will be automatically populated when this table is populated after a network optimization (Neo) run: the “CTS Unprofitable Demand” and “CTS Customer Profitability” dashboards. Here is a screenshot of the charts included in this second dashboard:

To learn more about the Analytics module in Cosmic Frog, please see the help center articles on this page.
In the above discussion of cost to serve outputs, the example model used did not contain any detailed production inputs, such as Bills Of Materials (BOMs), Work Centers, or Processes. We will now look at an example where these are used in the model and specifically focus on how to interpret the Optimization Cost To Serve Path Segment Details output table when bills of materials are included.
Consider the following records in the Bills of Materials input table for making the finished good FG_BLU (blue cheese):

In the model, these BOMs are associated through Production Policies for the BULK_BLU and FG_BLU products.
Next, we will have a look at the Optimization Cost To Serve Path Segment Details output table to understand how costs are allocated to raw material vs finished good segments:

As we have seen before, there are many more cost columns in this table, so for each segment these can be reviewed by scrolling right. Finally, let’s look at these 3 paths in the Optimization Cost To Serve Path Summary output table:

Note that the Path Product Name is FG_BLU for these 3 paths and there are no details to indicate which raw materials each of the paths pertains to. If required, this can be looked up using the Path IDs from this table in the Optimization Cost To Serve Path Segment Details output table.
Being able to assess the risk associated with your supply chain has become increasingly important in a quickly changing world with high levels of volatility. Not only does Cosmic Frog calculate an overall supply chain risk score for each scenario that is run, but it also gives you details about the risk at the location and flow level, so you can easily identify the highest and lowest risk components of your supply chain and use that knowledge to quickly set up new scenarios to reduce the risk in your network.
By default, any Neo optimization, Triad greenfield, or Throg simulation model run will have the default risk settings, called OptiRisk, applied using the DART risk engine. See also the Getting Started with the Optilogic Risk Engine documentation. Here we will cover how a Cosmic Frog user can set up their own risk profile(s) to rate the risk of the locations and flows in the network and that of the overall network. Inputs and outputs are covered, and in the last section, notes, tips, and additional resources are listed.
The following diagram shows the Cosmic Frog risk categories, their components, and subcomponents.

A description of these risk components and subcomponents follows here:
Custom risk profiles are set up and configured using the following 9 tables in the Risk Inputs section of Cosmic Frog’s input tables. These 9 input tables can be divided into 5 categories:

Following is a summary of these table categories; more details on individual tables will be discussed below:
We will cover some of the individual Risk Input tables in more detail now, starting with the Risk Rating Configurations table:

In the Risk Summary Configurations table, we can set the weights for the 4 different risk components that will be used to calculate the overall Risk Score of the supply chain. The 4 components are: customers, facilities, suppliers, and network. In the screenshot below, customer and supplier risk are contributing 20% each to the overall risk score while facility and network risk are contributing 30% each to the overall risk score.

These 4 weights should add up to 1 (=100%). If they do not add up to 1, Cosmic Frog will still run and automatically scale the Risk Score up or down as needed. For example, if the weights add up to 0.9, the final Risk Score that is calculated based on these 4 risk categories and their weights will be divided by 0.9 to scale it up to 100%. In other words, the weight of each risk category is multiplied by 1/0.9 = 1.11 so that the weights then add up to 100% instead of 90%. If you do not want to use a certain risk category in the Risk Score calculation, you can set its weight to 0. Note that you cannot leave a weight field blank. These rules around automatically scaling weights up or down to add up to 1 and setting a weight to 0 if you do not want to use that specific risk component or subcomponent also apply to the other “… Risk Configurations” tables.
Following are 2 screenshots of the Facility Risk Configurations table on which the weights and risk bands to calculate the Risk Score for individual facility locations are specified. A subset of these same risk components is also used for customers (Customer Risk Configurations table) and suppliers (Supplier Risk Configurations table). We will not discuss those 2 tables in detail in this documentation since they work in the same way as described here for facilities.



The first band is from 0.0 (first Band Lower Value) to 0.2 (the next Band Lower Value), meaning between 0% and 20% of total network throughput at an individual facility. The risk score for 0% of total network throughput is 1.0 and goes up to 2.0 when total network throughput at the facility goes up to 20%. For facilities with a concentration (= % of total network throughput) between 0 and 20%, the Risk Score will be linearly interpolated from the lower risk score of 1.0 to the higher risk score of 2.0. For example, a facility that has 5% of total network throughput will have a concentration risk score of 1.25. The next band is for 20%-30% of total network throughput, with an associated risk between 2.0 and 3.5, etc. Finally, if all network throughput is at only 1 location (Band Lower Value = 1.0), the risk score for that facility is 10.0. The risk scores for any band run from 1, lowest risk, to 10, highest risk.
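In general, within a band: risk score = lower score + (concentration − band lower value) / (next band lower value − band lower value) × (next score − lower score). For the 5% example: 1.0 + (0.05 − 0.0) / (0.2 − 0.0) × (2.0 − 1.0) = 1.25.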
The following screenshot shows the Geographic Risk Configurations table with 2 of its risk subcomponents, biolab distance and economic:

As an example here in the red outline, the Biolab Distance Risk is specified by setting its weight to 0.05 or 5% and specifying which band definition on the Risk Band Definitions table should be used, which is “BioLab and Nuclear Distance Band Template”. The Definition of this band template is as follows when looked up in the Risk Band Definitions table:

This band definition says that if a location is within 0-10 miles of a Biolab of Safety Level 4, the Risk Score is 10.0. A distance of 10-20 miles has an associated Risk Score between 10.0 and 7.75, etc. If a location is 130 miles or farther from a biolab of Safety Level 4, the Risk Score is 1.0.
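Interpolation within a band works the same way as described for concentration risk above: for example, a location 15 miles from a Safety Level 4 biolab would get a Risk Score of 10.0 + (15 − 10) / (20 − 10) × (7.75 − 10.0) = 8.875.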
The other 5 subcomponents of Geographic Risk are defined in a similar manner on this table: with a Risk Weight field and a Risk Band field that specifies which Band Definition on the Risk Band Definitions table is to be used for that risk subcomponent. The following table summarizes the names of the Band Definitions used for these geographic risk subcomponents and what the unit of measure is for the Band Values with an example:
Similar to the Geographic Risk component, the Utilization Risk component also has its own table, Utilization Risk Configurations, where its 3 risk subcomponents are configured. Again, each of the subcomponents has a Risk Weight field and a Risk Band field associated with it. The following table summarizes the names of the Band Definitions used for these utilization risk subcomponents and what the unit of measure is for the Band Values with an example:
Lastly on the Risk Inputs side, the Network Risk Configurations table specifies the components of Network Risk in a similar manner: with a Risk Weight and a Risk Band field for each risk component. The following table summarizes the names of the Band Definitions used for these network risk subcomponents and what the unit of measure is for the Band Values with an example:
Risk outputs can be found in some of the standard Output Summary Tables and in the risk specific Output Risk Tables:

The following screenshot shows the Optimization Risk Metrics Summary output table for a scenario called “Include Opt Risk Profile”. It shows both the OptiRisk and Risk Rating Template Optimization risk score outputs:

On the Optimization Customer Risk Metrics, Optimization Facility Risk Metrics, and Optimization Supplier Risk Metrics tables, the overall risk score for each customer, facility, and supplier can be found, respectively. They also show the risk scores of each risk component, e.g. for customers these components are Concentration Risk, Source Count Risk, and Geographic risk. The subcomponents for Geographic risk are further detailed in the Optimization Geographic Risk Metrics output table, where for each customer, facility, and supplier the overall geographic risk score and the risk scores of each of the geographic risk subcomponents are listed. Similarly, on the Facility Risk Metrics and Supplier Risk Metrics tables, the Utilization Risk score will be listed for each location, whilst the Optimization Utilization Risk Metrics table will detail the risk scores of the subcomponents of this risk (throughput utilization, storage utilization, and work center utilization).
Let’s walk through an example of how the risk score for the facility Plant_France_Paris_9904000 was calculated using a few screenshots of input and output tables. This first screenshot shows the Optimization Geographic Risk Metrics output table for this facility:

The geographic risk score of this facility is calculated as 4.0 and the values for all the geographic risk subcomponents are listed here too, for example 8.2 for biolab distance risk and 3.6 for political risk. The overall geographic risk of 4.0 was calculated using the risk score of each geographic risk subcomponent and the weights that are set on the Geographic Risk Configurations input table:

Geographic Risk Score = (biolab distance risk * biolab distance risk weight + economic risk * economic risk weight + natural disaster risk * natural disaster risk weight + nuclear distance risk * nuclear distance risk weight + political risk * political risk weight + epidemic risk * epidemic risk weight) / 0.8 = (8.2 * 0.05 + 5.3 * 0.2 + 2.8 * 0.3 + 2.3 * 0.05 + 3.6 * 0.1 + 4.5 * 0.1) / 0.8 = 4.0. (We need to divide by 0.8 since the weights do not add up to 1, but only to 0.8).
Next, in the Optimization Facility Risk Metrics output table we can see that the overall facility risk score is calculated as 6.8, which is the result of combining the concentration risk score, source count risk score, and geographic risk score using the weights set on the Facility Risk Configurations input table.

This screenshot shows the Facility Risk Configurations input table and the weights for the different risk components:

Facility Risk Score = (geographic risk * geographic risk weight + concentration risk * concentration risk weight + source count risk * source count risk weight) / 0.6 = (4.0 * 0.3 + 9.3 * 0.2 + 10.0 * 0.1) / 0.6 = 6.8. (We need to divide by 0.6 since the weights do not add up to 1, but only to 0.6).
A few things about Risk in Cosmic Frog that are good to keep in mind:
This documentation covers which geo providers one can use with Cosmic Frog, and how they can be used for geocoding and distance and time calculations.
Currently, there are 5 geo providers that can be used for geocoding locations in Cosmic Frog: MapBox, Bing, Google, PTV, and PC*Miler. MapBox is the default provider and comes free of cost with Cosmic Frog. To use any of the other 4 providers, you will need to obtain a license key from that provider and add it to Cosmic Frog through your Account. The steps to do so are described in the help article “Using Alternate Geocoding Providers”.
Different geocoding providers may specialize in different geographies; refer to your provider for guidelines.
Geocoding a location (e.g. a customer, facility, or supplier) means finding the latitude and longitude for it. Once a location is geocoded, it can be shown on a map in the correct location, which helps with visualizing the network itself and with building a visual story using model inputs and outputs that are shown on maps.

To geocode a location:

For costs and capacities to be calculated correctly, it may be necessary to add transport distances and transport times to Cosmic Frog models. There are defaults that will be used if nothing is entered into the model, or users can populate these fields, either themselves or by using the Distance Lookup utility. Here we will explain the tables where distances and times can be entered, what happens if nothing has been entered, and how users can utilize the Distance Lookup utility.
There are multiple Cosmic Frog input tables that have input fields related to Transport Distance and Transport Time, including Speed, which can also be used to calculate transport time from a transport distance (time = distance / speed). These fields all have their own accompanying UOM (unit of measure) field. Here is an overview of the tables which contain Distance, Time, and/or Speed fields:
For Optimization (Neo), this is the order of precedence that is applied when multiple tables and fields are used:

For Transportation (Hopper) models, this is the order of precedence when multiple tables and fields are being used:
To populate these input tables and their pertinent fields, the user has the following options:
Cosmic Frog users can find multiple handy utilities in the Utilities section of Cosmic Frog; here we will cover the Distance Lookup utility. This utility looks up transportation distances and times for origin-destination pairs and populates the Transit Matrix table. Bing, PC Miler, and Azure can be used as Geo Providers if the user has a license key for these. In addition, there is a new free PC Miler-UltraFast option which can look up accurate road distances within Europe and North America without needing a license key; this is also a very fast way to look up distances. Lastly, the Great Circle Geo Provider option calculates the straight-line distance for origin-destination pairs based on latitudes & longitudes. We will look at the configuration options of the utility using the next 2 screenshots:


Note that when using the Great Circle geo provider for Distance calculations, only the Transport Distance field in the Transit Matrix table will be populated. The Transport Time will be calculated at run time using the Average Speed on the Model Settings table.
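For reference, a great circle (straight-line) distance can be computed from latitudes and longitudes using the haversine formula. The following minimal Python sketch is illustrative only; Cosmic Frog's internal implementation may differ:

import math

def great_circle_miles(lat1, lon1, lat2, lon2):
    # haversine formula, using an Earth mean radius of 3,958.8 miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 3958.8 * math.asin(math.sqrt(a))

# hypothetical DC-CZ pair: Raleigh, NC to Hartford, CT
print(round(great_circle_miles(35.78, -78.64, 41.76, -72.67), 1))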
To finish up, we will walk through an example of using the Distance Lookup utility on a simple model with 3 customers (CZs) and 2 distribution centers (DCs), which are shown in the following 2 screenshots of the Customers and Facilities tables:


We can use Groups and/or Named Table Filters in the Transportation Policies table if we want to make 1 policy that represents all possible lanes from the DCs to the customers:

Next, we run the Distance Lookup utility with the following settings:
This results in the following 6 records being added to the Transit Matrix table - 1 for each possible DC-CZ origin-destination pair:

Cosmic Frog for Excel Applications provide alternative interfaces for specific use cases as companion applications to the full Cosmic Frog supply chain design product. For example, they can be used to access a subset of Cosmic Frog functionality in a simplified manner, or to give specific users who are not experienced in working with Cosmic Frog models access to a subset of inputs and/or outputs of a full-blown Cosmic Frog model that are relevant to their position.
Several example use cases are:
It is recommended to review the Cosmic Frog for Excel App Builder before diving into this documentation, as basic applications can quickly and easily be built with it rather than having to edit/write code, which is what will be explained in this help article. The Cosmic Frog for Excel App Builder can be found in the Resource Library and is also explained in the “Getting Started with the Cosmic Frog for Excel App Builder” help article.
Here we will discuss how one can set up and use a Cosmic Frog for Excel Application, which will include steps that use VBA (Visual Basic for Applications) in Excel and scripting using the programming language Python. This may sound daunting at first if you have little or no experience using these. However, by following along with this resource and the ones referenced in this document, most users will be able to set up their own App in about a day or 2 by copy-pasting from these resources and updating the parts that are specific to their use case. Generative AI engines like ChatGPT and Perplexity can be very helpful as well to get a start on VBA and Python code. Cosmic Frog functionality will not be explained much in this documentation; the assumption is that users are familiar with the basics of building, running, and analyzing outputs of Cosmic Frog models.
In this documentation we are mainly following along with the Greenfield App that is part of the Resource Library resource “Building a Cosmic Frog for Excel Application”. Once we have gone through this Greenfield app in detail, we will discuss how other common functionality that the Greenfield App does not use can be added to your own Apps.
There are several Cosmic Frog for Excel Applications that have been developed by Optilogic available in the Resource Library. Links to these and a short description of each of them can be found in the penultimate section “Apps Available in the Resource Library” of this documentation.
Throughout the documentation links to other resources are included; in the last section “List of All Resources” a complete list of all resources mentioned is provided.
The following screenshot shows at a high-level what happens when a typical Cosmic Frog for Excel App is used. The left side represents what happens in Excel, and on the right side what happens on the Optilogic platform.

A typical Cosmic Frog for Excel Application will contain at least several worksheets that each serve a specific purpose. As mentioned before, we are using the MicroAPP_Greenfield_v3.xlsm App from the Building a Cosmic Frog for Excel Application resource as an example. The screenshots in this section are of this .xlsm file. Depending on the purpose of the App, users will name and organize worksheets differently, and add/remove worksheets as needed too:




To set up and configure Cosmic Frog for Excel Applications, we mostly use .xlsm Excel files, which are macro-enabled Excel workbooks. When opening an .xlsm file that for example has been shared with you by someone else or has been downloaded from the Optilogic Resource Library (Help Article on How To Use the Resource Library), you may find that you see either a message about a Protected View where editing needs to be enabled or a Security Warning that Macros have been disabled. Please see the Troubleshooting section towards the end of this documentation on how to resolve these warnings.
To set up Macros using Visual Basic for Applications (VBA), go to the Developer tab of the Excel ribbon:

If the Developer option is not available in the ribbon, then go to File > Options > Customize Ribbon, select Developer from the list on the left and click on the Add >> button, then click on OK. Should you not see Options when clicking on File, then click on “More…” instead, which will then show you Options too.
Now that you are set up to start building Macros using VBA: go to the Developer tab, enable Design Mode and add controls to your sheets by clicking on Insert, and selecting any controls to insert from the drop-down menu. For example, add a button and assign a Macro to it by right clicking on the button and selecting Assign Macro from the right-click menu:



To learn more about Visual Basic for Applications, see this Microsoft help article Getting started with VBA in Office, it also has an entire section on VBA in Excel.
It is possible to add custom modules to VBA in which Sub procedures (“Subs”) and functions that perform specific tasks have been pre-defined; these can then be called from the rest of the VBA code in any workbook into which the module has been imported. Optilogic has created such a module, called Optilogic.bas. This module provides 8 standard functions for integration with the Optilogic platform.
You can download Optilogic.bas from the Building a Cosmic Frog for Excel Application resource in the Resource Library:

You can then import it into the workbook you want to use it in:

Right click on Modules in the VBA Project of the workbook you are working in and then select Import File…. Browse to where you have saved Optilogic.bas and select it. Once done, it will appear in the Modules section, and you can double click on it to open it:


These Optilogic specific Sub procedures and the standard VBA for Excel functionality enable users to create the Macros they require for their Cosmic Frog for Excel Applications.
App Keys are used to authenticate the user from the Excel App on the Optilogic platform. To get an App Key that you can enter into your Excel Apps, see this Help Center Article on Generating App and API Keys. During the first run of an App, the App Key will be copied from the cell it is entered into to an app.key file in the same folder as the Excel .xlsm file, and it will be removed from the worksheet. This is done by using the Manage_App_Key Sub procedure described in the “Optilogic.bas VBA Module” section above. The user can then keep running the App without having to enter the App Key again, unless the workbook or app.key file is moved elsewhere.
It is important to emphasize that App Keys should not be saved into Excel Apps as they can easily be accidentally shared when the Excel App itself is shared. Individual users need to authenticate with their own App Key.
When sharing an App with someone else, one easy way to do so is to share all contents of the folder where the Excel App is saved (optionally, zipped up). However, one needs to make sure to remove the app.key file from this folder before doing so.
A Python Job file in the context of Cosmic Frog for Excel Applications is the file that contains the instructions (in Python script format) for the operations of the App that take place on the Optilogic Platform.
Notes on Job files:
For Cosmic Frog for Excel Apps, a .job file is typically created and saved in the same folder as the Macro-enabled Excel workbook. As part of the Run Macro in that Excel workbook, the .job file will be uploaded to the Optilogic platform too (together with any input & settings data). Once uploaded, the Python code in the .job file will be executed, which may do things like loading the data from any uploaded CSV files into a Cosmic Frog model, run that Cosmic Frog model (a Greenfield run in our example), and retrieve certain outputs of interest from the Cosmic Frog model once the run is done.
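For illustration, a minimal Python sketch of the data-loading part of such a .job script might look as follows (the file, table, and model names here are hypothetical, and the write_table usage is an assumption; the actual greenfield.job file is stepped through later in this document):

import pandas as pd
from cosmicfrog import FrogModel

# connect to the Cosmic Frog model (hypothetical model name)
model = FrogModel("Greenfield Model")

# load a CSV that was uploaded alongside this .job file into a model input table
customers = pd.read_csv("Customers.csv")
model.write_table("customers", customers)  # assumption: write_table(table_name, dataframe)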
For a Python job that uses functionality from the cosmicfrog library to run, a requirements.txt file that just contains the text “cosmicfrog” (without the quotes) needs to be placed in the same folder as the .job file. Therefore, this file is typically created by the Excel Macro and uploaded together with any exported data & settings worksheets, the app.key file, and the .job file itself so they all land in the same working folder on the Optilogic platform. Note that the Optilogic platform will soon be updated so that using a requirements.txt file will not be needed anymore and the cosmicfrog library will be available by default.
Like with VBA, users and creators of Cosmic Frog for Excel Apps do not need to be experts in Python, and will mostly be able to do what they want by copy-pasting from existing Apps and updating only the parts that are different for their App. In the greenfield.job section further below, we will go through the code of the Python Job for the Greenfield App in more detail, which can be a starting point for users to start making changes for their own Apps. Next, we will provide some more details and references to quickly equip you with some basic knowledge, including what you can do with the cosmicfrog Python library.
There are a lot of helpful resources and communities online where users can learn everything there is to know about using & writing Python code. A great place to start is on the Python for Beginners page on python.org. This page also mentions how more experienced coders can get started with Python.
Working locally on any Python scripts/Jobs has the advantage that you can make use of code completion features, which help with things like auto-completion, showing which arguments functions need, and catching incorrect syntax/names. An example setup to achieve this is one where Python, Visual Studio Code, and an IntelliSense extension package for Python for Visual Studio Code are installed locally:
Once you are set up locally and are starting to work with Python files in Visual Studio Code, you will need to install the pandas and cosmicfrog libraries to have access to their functionality. You do this by typing the following in a terminal in Visual Studio Code:
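pip install pandas
pip install cosmicfrog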
More experienced users may start using additional Python libraries in their scripts and will need to similarly install them when working locally to have access to their functionality.
If you want to access items on the Optilogic platform (like Cosmic Frog models) while working locally, you will likely need to whitelist your IP address on the platform, so the connections are not blocked by a firewall. You can do this yourself on the Optilogic platform:

A great resource on how to write Python scripts for Cosmic Frog models is this “Scripting with Cosmic Frog” video. In this video, the cosmicfrog Python library, which adds specific functionality to the existing Python features to work with Cosmic Frog models, is covered in some detail already. The next set of screenshots will show an example using a Python script named testing123.py on our local set-up. The first screenshot shows a list of functions available from the cosmicfrog Python library:

When you continue typing after you have typed “model.”, the code completion feature will auto-generate a list of functions you may be getting at; in the next screenshot, these are ones that start with or contain a “g”, as only a “g” has been typed so far. This list will auto-update the more you type. You can select from the list with your cursor or the arrow up/down keys and hit the Tab key to auto-complete:

When you have completed typing the function name and next type a parenthesis ‘(‘ to start entering arguments, a pop-up will come up which contains information about the function and its arguments:

As you type the arguments for the function, the argument that you are on and the expected format (e.g. bool for a Boolean, str for string, etc.) will be in blue font and a description of this specific argument appears above the function description (e.g. above box 1 in the above screenshot). In the screenshot above we are on the first argument input_only which requires a Boolean as input and will be set to False by default if the argument is not specified. In the screenshot below we are on the fourth argument (original_names) which is now in blue font; its default is also False, and the argument description above the function description has changed now to reflect the fourth argument:

The next screenshot shows 2 examples of using the get_tablelist function of the FrogModel module:

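For reference, here is a minimal sketch of two such calls; the model name is hypothetical, and the input_only and original_names arguments are the ones described above:

from cosmicfrog import FrogModel

model = FrogModel("My Model")  # hypothetical model name

# all input and output tables, using Anura's lowercase table names
all_tables = model.get_tablelist()

# input tables only, keeping the original table names
input_tables = model.get_tablelist(input_only=True, original_names=True)

print(len(all_tables), len(input_tables))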
As mentioned above, you can also use Atlas on the Optilogic platform to create and run Python scripts. One drawback is that Atlas currently does not have code completion features like IntelliSense in Visual Studio Code.
The following simple test.py Python script on Atlas will print the first Hopper output table name and its column names:


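A minimal sketch of such a script follows; both the way the Hopper output tables are picked out of the table list here and the read_table call are assumptions:

from cosmicfrog import FrogModel

model = FrogModel("My Model")  # hypothetical model name

# assumption: Hopper output tables can be picked out of the full table list by name
hopper_tables = [t for t in model.get_tablelist() if "hopper" in t]

# print the first Hopper output table name and its column names
first_table = hopper_tables[0]
print(first_table)

df = model.read_table(first_table)  # assumption: returns a pandas DataFrame
print(list(df.columns))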
After running the Greenfield App, we can see the following files together in the same folder on our local machine:

On the Optilogic platform, a working folder is created by the Run Greenfield Macro. This folder is called “z Working Folder for Excel Greenfield App”. After running the Greenfield App, we can see the following files in here:

Parts of the Excel Macro and Python .job file will differ from App to App based on the App's purpose, but a lot of the content will be the same or similar. In this section we will step through the Macro behind the Run Greenfield button in the Cosmic Frog for Excel Greenfield App that is included in the “Building a Cosmic Frog for Excel Application” resource. We will explain at a high level what is happening at each step of the way and mention whether each part is likely to be different and in need of editing for other Apps, or would typically stay the same across most Apps. After stepping through this Excel Macro in this section, we will do the same for the Greenfield.job file in the next section.
The next screenshot shows the first part of the VBA code of the Run Greenfield Macro:

Note that throughout the Macro you will see text in green font. These are comments to describe what the code is doing and are not code that is executed when running the Macro. You can add comments by simply starting the line with a single quote and then typing your comment. Comments can be very helpful for less experienced users to understand what the VBA code is doing.
Next, the file path to the workbook is retrieved:

This piece of code uses the Get_Workbook_File_Path function of the Optilogic.bas VBA module to get the file path of the current workbook. This function first tries to get the path without user input. If it finds that the path looks like the Excel workbook is stored online, for example in a Cloud folder, it will instead use the user input in cell B3 on the Admin worksheet to get the file path. Note that specifying the file path is not necessary if the App runs fine without it, meaning it could get the path without user input. Only if the user gets the message “Local file path to this Excel workbook is invalid. It is possible the Excel workbook is in a cloud drive, or you have provided an invalid local path. Please review setup step 4 on Admin sheet.” should the local file path be entered into cell B3 on the Admin worksheet.
This code can be left as is for other Apps if there is an Admin worksheet (the variable pathsheetName indicated with 1 in screenshot above) where in cell B3 the file path (the variable pathCell indicated with 2 in screenshot above) can be specified. Of course, the worksheet name and cell can be updated if these are located elsewhere in the App. The message the user gets in this case (set as pathusrMsg indicated with 3 in the screenshot above) may need to be edited accordingly too.
The following code takes care of the App Key management:

The Manage_App_Key function from the Optilogic.bas VBA module is used here to retrieve the App Key from cell B2 on the Admin worksheet and put it into a file named app.key which is saved in the same location as the workbook when the App is run for the first time. The key is then removed from cell B2 and replaced with the text “app key has been saved; you can keep running the App”. As long as the app.key file and the workbook are kept together in the same location, the App will keep working.
Like the previous code on getting the local file path of the workbook, this code can be left as is for other Apps. Only if the App Key needs to be entered somewhere other than cell B2 on the worksheet named Admin do the keysheetName and keyCell variables (indicated with 1 and 2 in the screenshot above) need to be updated accordingly.
This App has a greenfield.job file associated with it that contains the Python script which will be run on the Optilogic platform when the App is run. The next piece of code checks that this greenfield.job file is saved in the same location as the Excel App, and it also sets the name of the folder to be created on the Optilogic platform where files will get uploaded to:

This code can be left as is for other Cosmic Frog for Excel Apps, except that the following will likely need updating:
The Greenfield settings are set in the next step. The ones the user can set on the Settings worksheet are taken from there and others are set to a default value:

Next, the Greenfield Settings and the other input data are written into .csv files:


The firstSpaceIndex variable is set to the location of the first space in the resource size string.
Looking in the Greenfield App on the Customers worksheet we see that this means that the Customer Name (column A), Latitude (column B), Longitude (column C), and Quantity (column D) columns will be exported. The Customers.csv file will contain the column names on the first row, plus 96 rows with data as the last populated row is row 97. Here follows a screenshot showing the Customers worksheet in the Excel App (rows 6-93 hidden) and the first 11 lines in the Customers.csv file that was exported while running the Greenfield App:

Other Cosmic Frog for Excel Applications will often contain data to be exported and uploaded to the Optilogic platform to refresh model data; the Export_CSV_File function can be used in the same way to export similar and other tabular data.
As mentioned in the “Python Job File and requirements.txt” section earlier, a requirements.txt file placed in the same folder as the .job file containing the Python script is needed so the Python script can use functionality from the cosmicfrog Python library. The next code snippet checks if this file already exists in the same location as the Excel App; if not, it creates it there and writes the text cosmicfrog into it.

This code can be used as is by other Excel Apps.
The next step is to upload all the files needed to the Optilogic platform:

Besides updating the local/platform file names and paths as appropriate, the Upload_File_To_Optilogic Sub procedure will be used by most if not all Excel Apps: even if the App only looks at outputs from model runs and does not modify any input data or settings, the procedure is still required to upload the .job, app.key, and requirements.txt files.
The next bit of code uses 2 more of the Optilogic.bas VBA module functions to run and monitor the Python job on the Optilogic platform:

This piece of code can stay as is for most Apps, just make sure to update the following if needed:
The last piece of code before some error handling downloads the results (2 .csv files) from the Optilogic platform using the Download_File_From_Optilogic function from the Optilogic.bas VBA module:

This piece of code can be used as is with the appropriate updates for worksheet names, cell references, file names, path names, and text of status updates and user messages. Depending on the number of files to be downloaded, the part of the code setting the names of the output files and doing the actual download (bullet 2 above) can be copy-pasted and updated as needed.
The last piece of VBA code of the Macro shown in the screenshot below has some error handling. Specifically, when the Macro tries to retrieve the local path of the Macro-enabled .xlsm workbook and it finds it looks like it is online, an error will pop up and the user will be requested to put the file path name in cell B3 on the Admin worksheet. If the Macro hits any other errors, a message saying “An unexpected error occurred: <error number> <error description>” will pop up. This piece of code can be left as is for other Cosmic Frog for Excel Applications.

We have used version 3 of the Greenfield App, which is part of the Building a Cosmic Frog for Excel Application resource, in the sections above. There is also a newer, stand-alone version (v6) of the Cosmic Frog for Excel – Greenfield application available in the Resource Library. In addition to all of the above, this App also:
This functionality is likely helpful for a lot of other Cosmic Frog for Excel Apps and will be discussed in section “Additional Common App Functionality” further below. We especially recommend using the functionality to prevent Excel from locking up in all your Apps.
Now we will go through the greenfield.job file that contains the Python script to be run on the Optilogic platform in detail.

This first piece of code takes care of importing several Python libraries and modules (optilogic, pandas, and time on lines 1, 2, and 5). There is another library, cosmicfrog, that is imported through the requirements.txt file discussed before in the section titled “Python Job File and requirements.txt”. Modules from these libraries are imported here as well (FrogModel from cosmicfrog on line 3 and pioneer.API from optilogic on line 4). The functionality of these libraries and their modules can now be used throughout the script that follows. The optilogic and cosmicfrog libraries are developed by Optilogic and contain specific functionality to work with Cosmic Frog models (e.g. the functions discussed in the section titled “Working with Python Locally” above) and with the Optilogic platform.
For reference:
This first piece of code can be left as is in the script files (.job files locally, .py files on the Optilogic platform) for most Cosmic Frog for Excel Applications. More advanced users may import different libraries and modules to use functionality beyond what the standard Python functionality plus the optilogic, cosmicfrog, pandas, and time libraries & modules together offer.
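As a concrete sketch, the import block described above could look as follows (the pd alias for pandas and the exact form of the pioneer import are assumptions):

```python
import optilogic                    # line 1: Optilogic platform library
import pandas as pd                 # line 2: dataframe handling
from cosmicfrog import FrogModel    # line 3: Cosmic Frog model connections
from optilogic import pioneer       # line 4: pioneer API module
import time                         # line 5: used when polling job status
```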
Next, a check_job_status function is defined that will keep checking a job until it has completed. This is used when running a job to know when the job is done and it is time to move on to the next step, which will often be downloading the results of the run. This piece of code can be kept as is for other Cosmic Frog for Excel Applications.

The following screenshot shows the next snippet of code, which defines a function called wait_for_jobs_to_complete. It uses check_job_status to periodically check if the job is done and, once it is, moves on to the next piece of code. Again, this can be kept as is for other Apps.

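In spirit, these two functions implement a simple polling loop. Here is a hedged sketch of that pattern, where the terminal status values and the get_status callable are stand-ins for the optilogic API call the real script uses:

```python
import time

def check_job_status(get_status):
    # get_status: zero-argument callable returning the job's current state;
    # in the real script this wraps an optilogic API call.
    # The terminal status values below are assumptions.
    return get_status() in ("done", "error", "cancelled", "stopped")

def wait_for_jobs_to_complete(get_status, poll_seconds=10):
    # Poll periodically until the job reaches a terminal state
    while not check_job_status(get_status):
        time.sleep(poll_seconds)
```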
Now it is time to create and/or connect to the Cosmic Frog model we want to use in our App:

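A minimal sketch of this step, assuming the model already exists on the platform and that FrogModel exposes a clear_table helper as used here (the model and table names are illustrative):

```python
from cosmicfrog import FrogModel

model = FrogModel("Greenfield App Model")  # hypothetical model name

# Clear the tables that are about to be repopulated from the Excel data
for table in ("customers", "customerdemand", "facilities", "suppliers", "greenfieldsettings"):
    model.clear_table(table)
```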
Note that, like the VBA code in the Excel Macro, we can add comments describing what the code is doing to our Python script too. In Python, comments start with the hash sign (#), and the font of comments automatically becomes green in the editor being used here (Visual Studio Code with the default Dark Modern color theme).
After clearing the tables, we will now populate them with the data from the Excel workbook. First, the uploaded Customers.csv file, which contains the columns Customer Name, Latitude, Longitude, and Quantity, is used to update both the Customers and the CustomerDemand tables:

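Continuing the sketch above, the gist of this step looks something like the following; the exact column renames and the write_table call as used here are assumptions:

```python
import pandas as pd

df = pd.read_csv("Customers.csv")

# Customers table: name plus coordinates
customers = df.rename(columns={
    "Customer Name": "customername",
    "Latitude": "latitude",
    "Longitude": "longitude",
}).drop(columns=["Quantity"])
model.write_table("customers", customers)

# CustomerDemand table: name plus demand quantity
demand = df.rename(columns={
    "Customer Name": "customername",
    "Quantity": "quantity",
})[["customername", "quantity"]]
model.write_table("customerdemand", demand)
```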
How much of the above code you can use as is depends very much on the App you are building, but the concepts of reading CSV files, renaming and dropping columns as needed, and writing tables into the Cosmic Frog model will be frequently used. The following piece of code also writes the Facilities and Suppliers data into the Cosmic Frog tables. Again, the concepts used here will be useful for other Apps too; the code may just not be exactly the same depending on the App and the tables being written to:

Next up, the Settings.csv file is used to populate the Greenfield Settings table in Cosmic Frog and to set 2 variables for resource size and scenario name:

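Conceptually, and continuing the sketch (the layout of Settings.csv and its column names are assumptions):

```python
df_settings = pd.read_csv("Settings.csv")
model.write_table("greenfieldsettings", df_settings)

# Pull out two values that are used later in the script
resource_size = df_settings.loc[df_settings["settingname"] == "Resource Size", "settingvalue"].iloc[0]
scenario_name = df_settings.loc[df_settings["settingname"] == "Scenario Name", "settingvalue"].iloc[0]
```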
Now that the Greenfield App Cosmic Frog model is populated with all the data needed, it is time to kick off the model and run a Greenfield analysis:

Besides updating any tags as desired (bullet 2b above), this code can be kept exactly as is for other Excel Apps.
Lastly, once the model is done running, the results are retrieved from the model and written into .csv files, which will then be downloaded by the Excel Macro:

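The pattern here is reading output tables back into dataframes and saving them as .csv files; in sketch form (the table and file names below are assumptions):

```python
for table, filename in [
    ("optimizationgreenfieldfacilitysummary", "FacilitySummary.csv"),
    ("optimizationgreenfieldcustomersummary", "CustomerSummary.csv"),
]:
    model.read_table(table).to_csv(filename, index=False)
```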
When the greenfield_job.py file starts running on the Optilogic platform, we can monitor and see the progress of the job in the Run Manager App:

The Greenfield App (version 3) that is part of the Building a Cosmic Frog for Excel Application resource covers a lot of common features users will want to use in their own Apps. In this section we will discuss some additional functionality users may also wish to add to their own Apps. This includes:
A newer version of the Greenfield App (version 6) can be found here in the Resource Library. This App has all the functionality version 3 has, plus: 1) it has an updated look with some worksheets renamed and some items moved around, 2) it has the option to cancel a Run after it has been kicked off and has not completed yet, 3) it prevents locking up of Excel while the App is running, 4) it reads a few CSV output files back into worksheets in the same workbook, and 5) it uses a Python library called folium to create maps that a user can open from the Excel workbook, which will then open in the user’s default browser. Please download this newer Greenfield App if you want to follow along with the screenshots in this section. First, we will cover how a user can prevent locking of Excel during a run and how to add a Cancel button which can stop a run that has not yet completed.
The screenshots call out what is different as compared to version 3 of the App discussed above. VBA code that is the same is not covered here. The first screenshot is of the beginning of the RunGreenfield_Click Macro that runs when the user hits the Run Greenfield button in the App:

The next screenshot shows the addition of code to enable the Cancel button once the Job has been uploaded to the Optilogic platform:

If everything completes successfully, a user message pops up, and the same 3 lines of code are added here too to enable the Run Greenfield buttons, disable the Cancel button, and keep other applications accessible:

Finally, a new Sub procedure CancelRun is added that is assigned to the Cancel button and will be executed when the Cancel button is clicked on:

This code gets the Job Key (unique identifier of the Job) from cell C9 on the Start worksheet and then uses a new function added to the Optilogic.bas VBA module that is named Cancel_Job_On_Optilogic. This function takes 2 arguments: the Job Key to identify the run that needs to be cancelled and the App Key to authenticate the user on the Optilogic platform.
Version 6 of the Greenfield App reads results from the Facility Summary, Customer Summary, and Flow Summary back into 3 worksheets in the workbook. A new Sub procedure named ImportCSVDataToExistingSheet (which can be found at the bottom of the RunGreenfield Macro code) is used to do this:

The procedure is used 3 times, importing 1 CSV file into 1 worksheet at a time. It takes 3 arguments:
We will discuss a few possible options for visualizing your supply chain and model outputs on maps when using or building Cosmic Frog for Excel Applications.
This table summarizes 3 of the mapping options: their pros, cons, and example use cases:
There is standard functionality in Excel to create 3D Maps. You can find this on the Insert tab, in the Tours group (next to Charts):

Documentation on how to get started with 3D Maps in Excel can be found here. Should your 3D Maps icon be greyed out in your Excel workbook, then this thread on the Microsoft Community forum may help troubleshoot this.
How to create an Excel 3D Map in a nutshell:
With Excel 3D Maps you can visualize locations on the map and for example base their size on characteristics like demand quantity. You can also create heat maps and show how location data changes over time. Flow maps that show lines between source and destination locations cannot be created with Excel 3D Maps. Refer to the Microsoft documentation to get a deeper understanding of what is possible with Excel 3D Maps.
The Cosmic Frog for Excel – Geocoding App in the Resource Library uses Excel 3D Maps to visualize customer locations that the App has geocoded on a map:

Here, the geocoded customers are shown as purple circles which are sized based on their total demand.
A good option for visualizing, for example, Hopper (= transportation optimization) routes on a map is the ArcGIS for Excel Add-in. If you do not have the add-in, you can get it from within Excel as follows:

You may be asked to log into your Microsoft account when adding the Add-in and/or when starting to use it. Should you experience any issues while trying to get the Add-in added to Excel, we recommend closing all Office applications and then opening only one Excel workbook through which you add the Add-in.
To start using the add-in and create ArcGIS maps in Excel:

Excel will automatically select all data in the worksheet that you are on. You can ensure the mapping of the data is correct or otherwise edit it:

After adding a layer, you can further configure it through the other icons at the top of the Layers window:

The other configuration options for the Map are found on the left-hand side of the Map configuration pane:

As an example, consider the following map showing the stops on routes created by the Hopper engine (Cosmic Frog’s transportation optimization technology). The data in this worksheet is from the Transportation Stop Summary Hopper output table:

As a next step we can add another layer to the map based on the Transportation Segment Summary Hopper output table to connect the source-destination pairs with each other using flow lines. For this we need to use the Esri JSON Geometry Location types mentioned earlier. An example Excel file containing the format needed for drawing polylines can be found in the last answer of this thread on the Esri community website: https://community.esri.com/t5/arcgis-for-office-questions/json-formatting-in-arcgis-for-excel/td-p/1130208, on the PolylinesExample1 worksheet. From this Excel file we can see that the format needed to draw a line connecting 2 locations is:
{"paths": [[<point1_longitude>, <point1_latitude>], [<point2_longitude>, <point2_latitude>]], "spatialReference": {"wkid": 4326}}
Where wkid indicates the well-known ID of the spatial reference to be used on the map (see above for a brief explanation and a link to a more elaborate explanation of spatial references). Here it is set to 4326, which is WGS 1984.
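If you generate this geometry column programmatically, a small helper can build the string for each source-destination pair. Note that general Esri JSON defines paths as a list of paths, each of which is a list of points, so the sketch below uses that three-level nesting:

```python
import json

def polyline_json(lon1, lat1, lon2, lat2):
    # Esri JSON polyline connecting two points, in WGS 1984 (wkid 4326)
    return json.dumps({
        "paths": [[[lon1, lat1], [lon2, lat2]]],
        "spatialReference": {"wkid": 4326},
    })

# Illustrative coordinates only
print(polyline_json(-87.63, 41.88, -84.39, 33.75))
```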
The next 2 screenshots show the data from a Segments Summary and an added layer to the map to show the lines from the stops on the route:


Note that for Hopper outputs with multiple routes, we now need to filter both the worksheet with the Stops information and the worksheet with the Segments information for the same route(s) to synchronize them. A better solution is to bring the stopID and Delivered Quantity information from the Stops output into the Segments output, so we only have 1 worksheet with all the information needed and both layers are generated from the same data. Then filtering this set of data will update both layers simultaneously.
Here, we will discuss a Python library called folium, which gives users the ability to create maps that can show flows and tooltips, with options to customize and auto-size location shapes and flow lines. We will again use the example of the Cosmic Frog for Excel – Greenfield App (version 6), where maps are created as .html files as part of the greenfield_job.py Python script that runs on the Optilogic platform. They are then downloaded as part of the results, and from within Excel users can click on buttons to show flows or customers, which opens the .html files in the user’s default browser. We will focus on the map- and folium-related differences with version 3 of the Greenfield App, discussing both the changes/additions to the VBA code in the Excel Run Greenfield Macro and the additions to greenfield_job.py. First up in the VBA code, we need to add folium to the requirements.txt file so that the Python script can make use of the library once it is uploaded to the Optilogic platform:

To do so, a line is added to the VBA code to write “folium” into requirements.txt.
As part of downloading all the results from the Optilogic platform after the Greenfield run has completed, we need to add downloading the .html map files that were created:

In this version of the Greenfield app, there is a new Results Summary worksheet that has 3 buttons at the top:

Each of these buttons has a Sub procedure assigned to it; let’s look at the one for the “Show Flows with Customers” button:

The map that is opened will look something like this, where a tooltip comes up when hovering over a flow line. (How to create and configure the map using folium will be discussed next.)

The additions to the greenfield.job file to make use of folium and create the 3 maps will now be covered:

First, at the beginning of the script, we need to add “import folium” (line 6), so that the library’s functionality can be used throughout the script. Next, the 3 Greenfield output tables that are used to create the 3 maps are read in, and a few data type changes are made to get the data ready for mapping:

This is repeated twice, once for the Optimization Greenfield Customer Summary output table and once for the Optimization Greenfield Flow Summary output table.
The next screenshot shows the code where the map showing Facilities is created and their Markers are configured based on whether the facility is an Existing Facility or a Greenfield Facility:

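In outline, and with a tiny stand-in dataframe (all names, colors, and coordinates here are illustrative rather than the App’s exact choices), the idea is:

```python
import folium
import pandas as pd

# Stand-in for the facilities dataframe read from the model
df_fac = pd.DataFrame({
    "facilityname": ["DC_Existing", "DC_Greenfield"],
    "latitude": [41.88, 33.75],
    "longitude": [-87.63, -84.39],
    "facilitytype": ["Existing Facility", "Greenfield Facility"],
})

m = folium.Map(location=[38.0, -90.0], zoom_start=4)
for _, row in df_fac.iterrows():
    # Marker color depends on whether the facility is existing or greenfield
    color = "blue" if row["facilitytype"] == "Existing Facility" else "green"
    folium.Marker(
        location=[row["latitude"], row["longitude"]],
        tooltip=row["facilityname"],
        icon=folium.Icon(color=color),
    ).add_to(m)
```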
In the next bit of code, the df_Res_Flows dataframe is used to draw lines on the map between origin and destination locations:

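Continuing the sketch above, each flow row contributes a PolyLine from origin to destination with a tooltip (the column names in the stand-in row are assumptions):

```python
# Stand-in for one row of the flows dataframe
flow = {"origin": "DC_Existing", "destination": "CZ_Atlanta",
        "olat": 41.88, "olon": -87.63, "dlat": 33.75, "dlon": -84.39,
        "quantity": 1200}

folium.PolyLine(
    locations=[[flow["olat"], flow["olon"]], [flow["dlat"], flow["dlon"]]],
    weight=2,
    tooltip=f"{flow['origin']} -> {flow['destination']}: {flow['quantity']}",
).add_to(m)
```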
Lastly, the customers from the Optimization Greenfield Customer Summary output table are added to the map that already contains facilities and flow lines, and the map is saved as greenfield_flows_customers_map.html:

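To finish the sketch, customers can be added as small circle markers before the map is saved to .html:

```python
folium.CircleMarker(
    location=[33.75, -84.39],   # illustrative customer coordinates
    radius=3,
    tooltip="CZ_Atlanta",
    fill=True,
).add_to(m)

m.save("greenfield_flows_customers_map.html")
```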
Here are some additional pointers that may be useful when building your own Cosmic Frog for Excel applications:


You may run into issues where Macros or scripts are not running as expected. Here we cover some common problems you may come across and their solutions.
When opening an Excel .xlsm file you may see the following message about the view being protected; you can click on Enable Editing if you trust the source:

Enabling editing is not necessarily sufficient to also be able to run any Macros contained in the .xlsm file, and you may see the following message after clicking on the Enable Editing button:

Closing this message box and then trying to run a Macro will result in the following message:

To resolve this, it is not always sufficient to just close and reopen the workbook and enable macros as the message suggests. Rather, go to the folder where the .xlsm file is saved in File Explorer, right-click on it, and select Properties:

At the bottom of the General tab, check the Unblock checkbox and then click on OK.

Now, when you open the .xlsm file again, you will have the option to Enable Macros; do so by clicking on the button. From now on, you will not need to repeat any of these steps when closing and reopening the .xlsm file; Macros will work fine.

It is also possible that instead of the Enable Editing warning and warnings around Macros not running discussed above, you will see a message that Macros have been disabled, as in the following screenshot. In this case, please click on the Enable Content button:

Depending on your anti-virus software and its settings, it is possible that the Macros in your Cosmic Frog for Excel Apps will not run as they are blocked by the anti-virus software. If you get “An unexpected error occurred: 13 Type mismatch”, this may be indicative of the anti-virus software blocking the Macro. Work with your IT department to allow the running of Macros.
If you are running Python scripts locally (say from Visual Studio Code) that are connecting to Cosmic Frog models and/or uploading files to the Optilogic platform, you may be unsuccessful and get warnings with the text “WARNING – create_engine_with_retry: Database not ready, retrying”. In this case, the likely cause is that your IP address needs to be added to the list of firewall exceptions within the Optilogic platform, see the instructions on how to do this in the “Working with Python Locally” section further above.
You will find that if you export cells containing formulas from Excel to CSV, they are exported as 0’s rather than as the calculated values. Possible solutions are 1) to export to a format other than CSV, such as .xlsx, or 2) to create an extra column in your data into which the results of the formula cells are copy-pasted as values, and to export this column instead of the one with the formulas (this way the formulas stay intact for the next run of the App). You could use the Record Macro option to get a start on the VBA code for copy-pasting values from one column into another, so that you do not have to do this manually each time you run the App; it instead becomes part of the Macro that runs when the App runs. An example of VBA code that copy-pastes values can be seen in this screenshot:

When running an App that has been run previously, there are likely output files in the folder where the App is located, for example CSV files that are opened by the user to view the results or are read back into a worksheet in the App. When running the App again, it is important to not have these output files open, otherwise an error will be thrown when the App gets to the stage of downloading the output files since open files cannot be overwritten.
There are currently several Cosmic Frog for Excel Applications available in the Resource Library, with more being added over time. Check back frequently and search for “Cosmic Frog for Excel” in the search bar to find all available Apps. A short description for each App that is available follows here:
As this documentation contains many links to references and resources, we will list them all here in one place: