Hopper is the Transportation Optimization algorithm within Cosmic Frog. It designs optimal multi-stop routes to deliver a given set of shipments to, or pick them up from, customer locations at the lowest cost. Fleet sizing and balancing of weekly demand can be achieved with Hopper too. Example business questions Hopper can answer are:
Hopper’s transportation optimization capabilities can be used in combination with network design to test out what a new network design means in terms of the last-mile delivery configuration. For example, questions that can be looked at are:
With ever increasing transportation costs, getting the last-mile delivery part of your supply chain right can make a big impact on the overall supply chain costs!
It is recommended to watch this short Getting Started with Hopper video before diving into the details of this documentation. The video gives a nice, concise overview of the basic inputs, process, and outputs of a Hopper model.
In this documentation we will first cover some general Cosmic Frog functionality that is used extensively in Hopper. Next, we go through how to build a Hopper model, discussing both required and optional inputs; we then explain how to run a Hopper model; Hopper outputs in tables, on maps, and in analytics are covered as well; and finally, references to a few additional Hopper resources are listed. Note that additional, more advanced Hopper features are covered in separate articles:
To avoid making this document too repetitive, we will cover here some general Cosmic Frog functionality that applies to all Cosmic Frog technologies and is used extensively by Hopper too.
To only show tables and fields in them that can be used by the Hopper transportation optimization algorithm, disable all icons except the 4th (“Transportation”) in the Technologies Selector from the toolbar at the top in Cosmic Frog. This will hide any tables and fields that are not used by Hopper and therefore simplifies the user interface:

Many Hopper related fields in the input and output tables will be discussed in this document. Keep in mind however that a lot of this information can also be found in the tooltips that are shown when you hover over the column name in a table, see following screenshot for an example. The column name, technology/technologies that use this field, a description of how this field is used by those algorithm(s), its default value, and whether it is part of the table’s primary key are listed in the tooltip.

There are a lot of fields with names that end in “…UOM” throughout the input tables. How they work will be explained here so that individual UOM fields across the tables do not need to be explained further in this documentation as they all work similarly. These UOM fields are unit of measure fields and often appear to the immediate right of the field that they apply to, like for example Distance Cost and Distance Cost UOM in the screenshot above. In these UOM fields you can type the Symbol of a unit of measure that is of the required Type from the ones specified in the Units Of Measure table. For example, in the screenshot above, the unit of measure Type for the Distance Cost UOM field is Distance. Looking in the Units of Measure table, we see there are multiple of these specified, like for example Mile (Symbol = MI), Yard (Symbol = YD) and Kilometer (Symbol = KM), so we can use any of these in this UOM field. If we leave a UOM field blank, then the Primary UOM for that UOM Type specified in the Model Settings table will be used. For example, for the Distance Cost UOM field in the screenshot above the tooltip says Default Value = {Primary Distance UOM}. Looking this up in the Model Settings table shows us that this is set to MI (= mile) in our current model. Let’s illustrate this with the following screenshots of 1) the tooltip for the Distance Cost UOM field (located on the Transportation Assets table), 2) units of measure of Type = Distance in the Units Of Measure table and 3) checking what the Primary Distance UOM is set to in the Model Settings table, respectively:



Note that only hours (Symbol = HR) is currently allowed as the Primary Time UOM in the Model Settings table. This means that if another Time UOM, like for example minutes (MIN) or days (DAY), is to be used, the individual UOM fields need to be used to set these. Leaving them blank would mean HR is used by default.
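To make this fallback behavior concrete, here is a minimal sketch (illustrative Python only, with example values; this is not Cosmic Frog's internal logic) of how a blank UOM field resolves to the Primary UOM from the Model Settings table:

```python
# Sketch of the UOM fallback described above (example values, not Cosmic Frog internals).
PRIMARY_UOMS = {
    "Distance": "MI",   # Primary Distance UOM as set in the Model Settings table of this example
    "Time": "HR",       # only HR is currently allowed as the Primary Time UOM
}

def resolve_uom(uom_value, uom_type):
    """Return the UOM to use: the symbol typed into the ...UOM field,
    or the model's Primary UOM for that Type when the field is left blank."""
    if uom_value:                      # e.g. "KM" typed into the Distance Cost UOM field
        return uom_value
    return PRIMARY_UOMS[uom_type]      # blank -> Primary Distance UOM = "MI"

print(resolve_uom("KM", "Distance"))   # KM
print(resolve_uom(None, "Distance"))   # MI
print(resolve_uom(None, "Time"))       # HR
```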
With few exceptions, all tables in Cosmic Frog contain both a Status field and a Notes field. These are often used to add elements to a model that are not currently part of the supply chain (commonly referred to as the “Baseline”), but are to be included in scenarios, either because they will definitely become part of the future supply chain or to see whether there are benefits to optionally including them going forward. In these cases, the Status in the input table is set to Exclude and the Notes field often contains a description along the lines of ‘New Market’, ‘New Product’, ‘Box truck for Scenarios 2-4’, ‘Depot for scenario 5’, ‘Include S6’, etc. When creating scenario items for setting up scenarios, the table can then be filtered for Notes = ‘New Market’ while setting Status = ‘Include’ for those filtered records. We will not call out these Status and Notes fields for each individual input table in the remainder of this document, but we definitely encourage users to use them extensively as they make creating scenarios very easy. When exploring any Cosmic Frog models in the Resource Library, you will notice the extensive use of these fields too. The following 2 screenshots illustrate the use of the Status and Notes fields for scenario creation: 1) several customers on the Customers table, where CZ_Secondary_1 and CZ_Secondary_2 are not currently being served but we want to explore what it takes to serve them in the future; their Status is set to Exclude and the Notes field contains ‘New Market’; 2) a scenario item called ‘Include New Market’ that changes the Status of Customers where Notes = ‘New Market’ to ‘Include’.


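Conceptually, the ‘Include New Market’ scenario item shown above performs a simple filter-and-update on the Customers table. The following pandas sketch (purely illustrative, with made-up records; actual scenario items are created in the Scenarios module using scenario syntax) shows the equivalent operation:

```python
import pandas as pd

# Simplified, illustrative stand-in for the Customers table from the screenshots above.
customers = pd.DataFrame({
    "customername": ["CZ_Primary_1", "CZ_Secondary_1", "CZ_Secondary_2"],
    "status":       ["Include",      "Exclude",        "Exclude"],
    "notes":        ["",             "New Market",     "New Market"],
})

# Equivalent of the 'Include New Market' scenario item:
# filter for Notes = 'New Market' and set Status = 'Include' for those records.
customers.loc[customers["notes"] == "New Market", "status"] = "Include"
print(customers)
```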
The Status and Notes fields are also often used for the opposite where existing elements of the current supply chain are excluded in scenarios in cases where for example locations, products or assets are going to go offline in the future. To learn more about scenario creation, please see this short Scenarios Overview video, this Scenario Creation and Maps and Analytics training session video, this Creating Scenarios in Cosmic Frog help article, and this Writing Scenario Syntax help article.
A subset of Cosmic Frog’s input tables needs to be populated in order to run Transportation Optimization, whereas several other tables can be used optionally based on the type of network that is being modelled and the questions the model needs to answer. The required tables are indicated with a green check mark in the screenshot below, whereas the optional tables have an orange circle in front of them. The Units Of Measure and Model Settings tables are general Cosmic Frog tables that are not used only by Hopper; they will already be populated with default settings, which can be added to and changed as needed.

We will first discuss the tables that are required to be populated to set up a basic Hopper model and then cover what can be achieved by also using the optional tables and fields. Note that the screenshots of all input and output tables mostly show the fields in the order they appear in the Cosmic Frog user interface; however, on occasion the order of the fields was rearranged manually. So, if you do not see a specific field in the same location as in a screenshot, please scroll through the table to find it.
The Customers table contains the locations that are considered customers for modelling purposes: the locations that we need to deliver a certain amount of certain product(s) to, or pick a certain amount of product(s) up from. The customers need to have their latitudes and longitudes specified so that distances and transport times of route segments can be calculated, and routes can be visualized on a map. Alternatively, users can enter location information like address, city, state, postal code, and country, and use Cosmic Frog’s built-in geocoding tool to populate the latitude and longitude fields. If the customer’s business hours are important to take into account in the Hopper run, its operating schedule can be specified here too, along with customer-specific variable and fixed pickup & delivery times. The following screenshot shows an example of several populated records in the Customers table:

The pickup & delivery time input fields can be seen when scrolling right in the Customers table (the accompanying UOM fields are omitted in this screenshot):

Finally, scrolling even further right, there are 3 additional Hopper-specific fields in the Customers table:

The Facilities table needs to be populated with the location(s) the transportation routes start from and end at; they are the domicile locations for vehicles (assets). The table is otherwise identical to the Customers table: location information can again be used by the geocoding tool to populate the latitude and longitude fields if they are not yet specified. And like in other tables, the Status and Notes fields are often used to set up scenarios. This screenshot shows the Facilities table populated with 2 depots, 1 current one in Atlanta, GA, and 1 new one in Jacksonville, FL:

Scrolling further right in the Facilities table shows almost all the same fields as those to the right on the Customers table: Operating Schedule, Operating Calendar, and Fixed & Unit Pickup & Delivery Times plus their UOM fields. These all work the same as those on the Customers table; please refer to their descriptions in the previous section.
The item(s) that are to be delivered to the customers from the facilities are entered into the Products table. It contains the Product Name and, again, Status and Notes fields for ease of scenario creation. Details around the Volume and Weight of the product are entered here too; these are further explained below this screenshot of the Products table, where just one product “PRODUCT” has been specified:

On the Transportation Assets table, the vehicles to be used in the Hopper baseline and any scenario runs are specified. There are a lot of fields around capacities, route and stop details, delivery & pickup times, and driver breaks that can be used on this table, but there is no requirement to use all of them. Use only those that are relevant to your network and the questions you are trying to answer with your model. We will discuss most of them through multiple screenshots. Note that the UOM fields have been omitted in these screenshots. Let’s start with this screenshot showing basic asset details like name, number of units, domicile locations, and rate information:

The following screenshot shows the fields where the operating schedule of the asset, any fixed costs, and capacity of the vehicles can be entered:

Note that if all 3 of these capacities are specified, the most restrictive one will be used. If you for example know that a certain type of vehicle always cubes out, then you could just populate the Volume Capacity and Volume Capacity UOM fields and leave the other capacity fields blank.
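To make the “most restrictive capacity wins” behavior concrete, the following sketch (illustrative only, with made-up field names and values; blank capacities are treated as unlimited) checks whether a candidate load fits on an asset:

```python
import math

def load_fits(asset, quantity, weight, volume):
    """A load fits only if it respects every capacity that is specified on the asset;
    capacities that are left blank (None) are treated as unlimited."""
    qty_cap    = asset.get("quantity_capacity") or math.inf
    weight_cap = asset.get("weight_capacity")   or math.inf
    volume_cap = asset.get("volume_capacity")   or math.inf
    return quantity <= qty_cap and weight <= weight_cap and volume <= volume_cap

# A vehicle that always cubes out: only the volume capacity is populated.
box_truck = {"volume_capacity": 1200.0}
print(load_fits(box_truck, quantity=500, weight=8000, volume=1100))   # True
print(load_fits(box_truck, quantity=500, weight=8000, volume=1300))   # False: volume is binding
```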
If you scroll further right, you will see the following fields that can be used to set limits on route distance and time when using this type of vehicle. Where applicable, you will notice their UOM fields too (omitted in the screenshot):

Limits on the number of stops per route can be set too:

A tour is defined as all the routes a specific unit of a vehicle is used on during the model horizon. Limits around routes, time, and distance for tours can be added if required:

Scrolling still further right you will see the following fields that can be used to add details around how long pickup and delivery take when using this type of vehicle. These all have their own UOM fields too (omitted in the screenshot):

The next 2 screenshots show the fields on the Transportation Assets table where rules around driver duty, shift, and break times can be entered. Note that these fields each have a UOM field that is not shown in the screenshots:


Limits on out-of-route distance can be set too, as can details regarding the weight of the asset itself and the level of CO2 emissions:


Lastly, a default cost, fixed times for admin, and an operating calendar can be specified for a vehicle in the following fields on the Transportation Assets table:

As a reference, these are the Department of Transportation driver regulations in the US and the EU. They have been somewhat simplified from these sources: US DoT Regulations and EU DoT Regulations:
Consider this route that starts from the DC, then goes to CZ1 & CZ2, and then returns to the DC:

The activities on this route can be thought of as follows, where the start of the Rest is the end of Shift 1 and Shift 2 starts at the end of the Rest:

Notes on Driver Breaks:
Except for asset fixed costs, which are set on the Transportation Assets table, and any Direct Costs which are set on the Shipments table, all costs that can be associated with a multi-stop route can be specified in the Transportation Rates table. The following screenshot shows how a transportation rate is set up with a name, a destination name and the first several cost fields. Note that UOM fields have been omitted in this screenshot, but that each cost field has its own UOM field to specify how the costs should be applied:

Scrolling further right in the Transportation Rates table we see the remaining cost fields:

Finally, a minimum charge and fuel surcharge can be specified as part of a transportation rate too:

The Shipments table specifies the amount of product that needs to be delivered from which source facility/supplier to which destination customer, or picked up from which customer. Optionally, details around pickup and delivery times, direct costs, and fixed template routes can be set on this table too. Note that the Shipments table is Transportation Asset agnostic, meaning that the Hopper transportation optimization algorithm will choose the optimal vehicle to use from those domiciled at the source location. This first screenshot of the Shipments table shows the basic shipment details:

Here is an example of a subset of Shipments for a model that will route both pickups and deliveries:

To the right in the Shipments table we find the fields where details around shipment windows can be entered:

Still further right on the Shipments table are the fields where details around pickup and delivery times can be specified:

Finally, furthest right on the Shipments table are fields where Direct Costs, details around Template Routes and decompositions can be configured:

Note that there are multiple ways to switch between forcing shipments and the order of stops onto a template route and letting Hopper optimize which shipments are put on a route together and in which order. Two example approaches are:
We will now cover the tables and input fields that can optionally be populated for their inputs to be used by Hopper. Where applicable, we will also mention how Hopper behaves when these are not populated.
In the Transit Matrix table, the transport distance and time for any source-destination-asset combination that could be considered as a segment of a route by Hopper can be specified. Note that the UOM fields in this table are omitted in following screenshot:

The transport distances for any source-destination pairs that are not specified in this table will be calculated based on the latitudes and longitudes of the source and destination and the Circuity Factor that is set in the Model Settings table. Transport times for these pairs will be calculated based on the transport distance and the vehicle’s Speed as set on the Transportation Assets table or, if Speed is not defined on the Transportation Assets table, the Average Speed in the Model Settings table.
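As an illustration of this fallback logic, the sketch below estimates the distance and time for a pair that is not in the Transit Matrix. This is a simplified sketch using a great-circle distance and example values for the Circuity Factor, Speed, and Average Speed; it is not Hopper's exact implementation:

```python
import math

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in miles between two latitude/longitude points."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def fallback_distance_and_time(orig, dest, circuity_factor=1.2,
                               asset_speed=None, average_speed=55.0):
    """Distance = straight-line distance * Circuity Factor (Model Settings);
    time = distance / Speed (Transportation Assets), or / Average Speed (Model Settings)
    when the asset's Speed is not defined. All parameter values here are example values."""
    distance = great_circle_miles(*orig, *dest) * circuity_factor
    speed = asset_speed if asset_speed else average_speed
    return distance, distance / speed          # miles, hours

# Atlanta, GA -> Jacksonville, FL (approximate coordinates)
dist, hours = fallback_distance_and_time((33.75, -84.39), (30.33, -81.66))
print(round(dist, 1), "miles,", round(hours, 2), "hours")
```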
Costs that need to be applied on a stop basis can be specified in the Transportation Stop Rates table:

If Template Routes are specified on the Shipments table by using the Template Route Name and Template Route Stop Sequence fields, then the Template Routes table can be used to specify if and how insertions of other Shipments can be made into these template routes:

If a template route is set up by using the Template Route Name and Template Route Stop Sequence fields in the Shipments table and this route is not specified in the Template Routes table, it means that no insertions can be made into this template route.
In addition to routing shipments with a fixed amount of product to be delivered to a customer location, Hopper can also solve problems where routes throughout a week need to be designed to balance out weekly demand while achieving the lowest overall routing costs. The Load Balancing Demand and Load Balancing Schedules tables can be used to set this up. If both the Shipments table and the Load Balancing Demand/Schedules tables are populated, by default the Shipments table will be used and the Load Balancing Demand/Schedules tables will be ignored. To switch to using the Load Balancing Demand/Schedules tables (and ignoring the Shipments table), the Run Load Balancing toggle in the Hopper (Transportation Optimization) Parameters section on the Run screen needs to be switched to on (toggle to the left and grey is off; to the right and blue is on):

The weekly demand, the number of deliveries per week, and, optionally, a balancing schedule can be specified in the Load Balancing Demand table:

To balance demand over a week according to a schedule, these schedules can be specified in the Load Balancing Schedules table:


In the screenshots above, the 3 load balancing schedules that have been set up will spread the demand out as follows:
In the Relationship Constraints table, we can tell Hopper what combinations of entities are not allowed on the same route. For example, in the screenshot below we are saying that customers that make up the Primary Market cannot be served on the same route as customers from the Secondary Market:

A few examples of common Relationship Constraints are shown in the following screenshot where the Notes field explains what the constraint does:

To set the availability of customers, facilities, and assets to certain start and end times by day of the week, the Business Hours table can be used. The Schedule Name specified on this table can then be used in the Operating Schedule fields on the Customers, Facilities and Transportation Assets tables. Note that the Wednesday – Saturday Open Time and Close Time fields are omitted in the following screenshot:

To schedule closure of customers, facilities, and assets on certain days, the Business Calendars table can be used. The Calendar Name specified on this table can then be used in the Operating Calendar fields on the Customers, Facilities and Transportation Assets tables:

Groups are a general Cosmic Frog feature to make modelling quicker and easier. By grouping elements that behave the same into a group, we can reduce the number of records we need to populate in certain tables: the Group name can be used to populate a field instead of setting up multiple records, one for each individual element, that would otherwise all contain the same information. Under the hood, when a model that uses Groups is run, these Groups are enumerated into the individual members of the group. We have, for example, already seen that groups of Type = Customers were used in the Relationship Constraints table in the previous section to prevent customers in the Primary Market being served on the same route as customers in the Secondary Market. Looking in the Groups table, we can see which customers are part (‘members’) of each of these groups:

Examples of other Hopper input tables where use of Groups can be convenient are:
Note that in addition to Groups, Named Filters can be used in these instances too. Learn more about Named Filters in this help center article.
The Step Costs table is a general table in Cosmic Frog used by multiple technologies. It is used to specify costs that change based on the throughput level. For Hopper, all cost fields on the Transportation Rates table, the Transportation Stop Rates table, and the Fixed Cost on the Transportation Assets table can be set up to use Step Costs. We will go through an example of how Step Costs are set up, associated with the correct cost field, and how to understand outputs using the following 3 screenshots of the Step Costs table, Transportation Rates table and Transportation Route Summary output table, respectively. The latter will also be discussed in more detail in the next section on Hopper outputs.

In this example, the per unit cost for units 0 through 20 is $1, $0.90 for units 21 through 40, and $0.85 for all units over 40. Had the Step Cost Behavior field been set to All Item, then the per unit cost for all items would be $1 if the throughput is between 0 and 20 units, $0.90 if the throughput is between 21 and 40 units, and $0.85 if the throughput is over 40 units.
In this screenshot of the Transportation Rates table, it is shown that the Unit Cost field is set to UnitCost_1 which is the stepped cost with 3 throughput levels that we just discussed in the screenshot above:

Lastly, this is a screenshot of the Transportation Route Summary output table where we see that the Delivered Quantity on Route 1 is 78. With the stepped cost structure as explained above for UnitCost_1, the Unit Cost in the output is calculated as follows: 20 * $1 (for units 1-20) + 20 * $0.9 (for units 21-40) + 38 * $0.85 (for units 41-78) = $20 + $18 + $32.30 = $70.30.

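The following sketch reproduces the arithmetic of this worked example for both Step Cost Behavior settings (illustrative only; the step definition mirrors UnitCost_1 above):

```python
import math

# Step definition for UnitCost_1: (upper quantity bound, per-unit cost)
STEPS = [(20, 1.00), (40, 0.90), (math.inf, 0.85)]

def stepped_cost(quantity, steps, behavior="Incremental"):
    if behavior == "All Item":
        # The tier that the total throughput falls into applies to every unit.
        for upper, rate in steps:
            if quantity <= upper:
                return quantity * rate
    # Incremental: each tier's rate applies only to the units that fall within that tier.
    total, lower = 0.0, 0
    for upper, rate in steps:
        units_in_tier = max(0, min(quantity, upper) - lower)
        total += units_in_tier * rate
        lower = upper
    return total

print(round(stepped_cost(78, STEPS), 2))                       # 70.3 -> matches Route 1 above
print(round(stepped_cost(78, STEPS, behavior="All Item"), 2))  # 66.3 -> all 78 units at $0.85
```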
When the input tables have been populated and scenarios are created (several resources explaining how to set up and configure scenarios are listed in the “2.4 Status and Notes fields” section further above), one can start a Hopper run by clicking on the Run button at the top right in Cosmic Frog:

The Run screen will come up:

Once a Hopper run is completed, the Hopper output tables will contain the outputs of the run.
As with other Cosmic Frog algorithms, we can look at Hopper outputs in output tables, on maps and analytics dashboards. We will discuss each of these in the next 3 sections. Often scenarios will be compared to each other in the outputs to determine which changes need to be made to the last-mile delivery part of the supply chain.
In the Output Summary Tables section of the Output Tables are 8 Hopper specific tables, they start with “Transportation…”. Plus, there is also the Hopper specific detailed Transportation Activity Report table in the Output Report Tables section:

Switch from viewing Input Tables to Output Tables by clicking on the round grid at the top right of the tables list. The Transportation Summary table gives a high-level summary of each Hopper scenario that has been run and the next 6 Summary output tables contain the detailed outputs at the route, asset, shipment, stop, segment, and tour level. The Transportation Load Balancing Summary output table is populated when a Load Balancing scenario has been run, and summarizes outputs at the daily level. The Transportation Activity Report is especially useful to understand when Rests and Breaks are required on a route. All these output tables will be covered individually in the following sections.
The Transportation Summary table contains outputs for each scenario run that include Hopper run details, cost details, how much product was delivered and how, total distance and time, and how many routes, stops and shipments there were in total.

The Hopper run details that are listed for each scenario include:
The next 2 screenshots show the Hopper cost outputs, summarized by scenario:


Scrolling further right in the Transportation Summary table shows the details around how much product was delivered in each scenario:

The Total Delivered Quantity, Total Direct Quantity, and Total Undelivered Quantity are listed in these columns, expressed in the Quantity UOM shown in the farthest right column in this screenshot (eaches here). If the Total Direct Quantity is greater than 0, details around which shipments were delivered directly to the customer can be found in the Transportation Shipment Summary output table where the Shipment Status = Direct Shipping. Similarly, if the Total Undelivered Quantity is greater than 0, then more details on which shipments were not delivered and why can be found in the Unrouted Reason field of the Transportation Shipment Summary output table where the Shipment Status = Unrouted.
The next set of output columns when scrolling further right repeat these delivered, direct and undelivered amounts by scenario, but in terms of volume and weight.
Still further to the right we find the outputs that summarize the total distance and time by scenario:


Lastly, the fields furthest right on the Transportation Summary output table contain details around the number of routes, assets and shipments, and CO2 emissions:

A few columns contained in this table are not shown in any of the above screenshots, these are:
The Transportation Route Summary table contains details for each route in each scenario that include cost, distance & time, number of stops & shipments, and the amount of product delivered on the route.

The costs that together make up the total Route Cost are listed in the next 11 fields shown in the next 2 screenshots:


The next set of output fields show the distance and time for each route:


Finally, the fields furthest right in the Transportation Route Summary table list the amount of product that was delivered on the routes, and the number of stops and delivered shipments on each route.

The Transportation Asset Summary output table contains the details of each type of asset used in each scenario. These details include costs, amount of product delivered, distance & time, and the number of delivered shipments.

The costs that together make up the Total Cost are listed in the next 12 fields:


The next set of fields in the Transportation Asset Summary summarize the distances and times by asset type for the scenario:


Furthest to the right on the Transportation Asset Summary output table we find the outputs that list the total amount of product that was delivered, the number of delivered shipments, and the total CO2 emissions:

The Transportation Shipment Summary output table lists for each included Shipment of the scenario the details of which asset type it is served by, which stop on which route it is, the amount of product delivered, the allocated cost, and its status.

The next set of fields in the Transportation Shipment Summary table list the total amount of product that was delivered to this stop.

The next screenshot of the Transportation Shipment Summary shows the outputs that detail the status of the shipment, costs, and a reason in case the shipment was unrouted.

Lastly, the outputs furthest to the right on the Transportation Shipment Summary output table list the pickup and delivery time and dates, the allocation of CO2 emissions and associated costs, and the Decomposition Name if used:

The Transportation Stop Summary output table lists for each route all the individual stops and their details around amount of product delivered, allocated cost, service time, and stop location information.
This first screenshot shows the basic details of the stops in terms of route name, stop ID, location, stop type, and how much product was delivered:

Somewhat further right on the Transportation Stop Summary table we find the outputs that detail the route cost allocation and the different types of time spent at the stop:

Lastly, farthest right on the Transportation Stop Summary table, arrival, service, and departure dates are listed, along with the stop’s latitude and longitude:

The Transportation Segment Summary output table contains distance, time, and source and destination location details for each segment (or “leg”) of each route.
The basic details of each segment are shown in the following screenshot of the Transportation Segment Summary table:

Further right on the Transportation Segment Summary output table, the time details of each segment are shown:

Next on the Transportation Segment Summary table are the latitudes and longitudes of the segment’s origin and destination locations:

And farthest right on the Transportation Segment Summary output table details around the start and end date and time of the segment are listed, plus CO2 emissions and the associated CO2 cost:

For each Tour (= asset schedule) the Transportation Tour Summary output table summarizes the costs, distances, times, and CO2 details.
The next 3 screenshots show the basic tour details and all costs associated with a tour:



The next screenshot shows the distance outputs available for each tour on the Transportation Tour Summary output table:

Scrolling further right on the Transportation Tour Summary table, the outputs available for tour times are listed:


If a load balancing scenario has been run (see the Load Balancing Demand input table further above for more details on how to run this), then the Transportation Load Balancing Summary output table will be populated too. Details on amount of product delivered, plus the number of routes, assets and delivered shipments by day of the week can be found in this output table; see the following 2 screenshots:


For each route, the Transportation Activity Report lists all activities in chronological order with details around distance and time. It also breaks down how far along the duty and drive times are at each point in the route, which is very helpful for understanding when rests and short breaks happen.
This first screenshot of the Transportation Activity Report shows the basic details of the activities:

Next, the distance, time, and delivered amount of product are detailed on the Transportation Activity Report:

Finally, the last several fields on the Transportation Activity Report detail cost and the duty and drive times accumulated thus far:

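To illustrate how the accumulated drive and duty times in this report relate to breaks, here is a simplified conceptual sketch. The limits, durations, and activities used are assumed example values only; this is not Hopper's actual scheduling logic, which follows the Transportation Assets fields and the DOT regulations discussed earlier:

```python
# Conceptual sketch only: walk through route activities, accumulate drive and duty time,
# and flag where a short break would be needed. The limits below are assumed example values.
MAX_DRIVE_BEFORE_BREAK = 8.0   # hours of driving allowed before a short break (assumed)
BREAK_DURATION = 0.5           # hours (assumed)

activities = [                 # (activity, duration in hours, counts as driving?)
    ("Drive DC -> CZ1",  5.0, True),
    ("Service CZ1",      1.0, False),
    ("Drive CZ1 -> CZ2", 4.0, True),
    ("Service CZ2",      1.0, False),
    ("Drive CZ2 -> DC",  3.0, True),
]

drive_time = duty_time = 0.0
for name, duration, is_driving in activities:
    if is_driving and drive_time + duration > MAX_DRIVE_BEFORE_BREAK:
        duty_time += BREAK_DURATION          # insert a short break before this driving leg
        drive_time = 0.0
        print(f"Break required before '{name}' (duty time so far: {duty_time:.1f} h)")
    drive_time += duration if is_driving else 0.0
    duty_time += duration
    print(f"{name:<18} drive so far: {drive_time:.1f} h, duty so far: {duty_time:.1f} h")
```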
As with the other engines within Cosmic Frog, Maps are very helpful in visualizing baseline and scenario outputs. Here, we will discuss how to set up Hopper specific Maps at a high level; we will not cover all the ins and outs of maps. If you are unfamiliar with the Maps module in Cosmic Frog, then please review the “Getting Started with Maps” article in the Optilogic Help Center first. It covers how to add and configure new maps and their layers.
Visualizing Hopper routes and direct shipments on maps is achieved by adding map layers which use 1 of the following as the table name:
Using what we have discussed above and the learnings from the Getting Started with Maps help center article, we can create the following map quite easily and quickly (the model used here is one from the Resource Library, named Transportation Optimization):

The steps taken to create this map are:
Let’s also cover 2 maps of a model where both pickups and deliveries are being made, from “backhaul” and to “linehaul” customers, respectively. Setting the LIFO (Is Last In First Out) field on the Transportation Assets table to True leads to routes that contain both pickup and delivery stops, but with all the pickups made at the end (e.g. modeling backhaul):

Two example routes are being shown in the screenshot above and we can see that all deliveries are first made to the linehaul customers which have blue icons. Then, pickups are made at the backhaul customers which have orange icons. If we want to design interleaved routes where pickups and deliveries can be mixed, we need to set the LIFO field to False. The following screenshot shows 2 of these interleaved routes:

The above 2 screenshots use the Transportation Routes Map Layer as the table name to draw the Routes map layer, where the condition builder is used to filter for 2 of the route names.
Finally, we will go back to the Transportation Optimization model which was used for the first map screenshot in this section. The Baseline scenario in this model has 1 shipment that is being shipped directly. To visualize this on the map we add a map layer named "Direct Shipping" which uses the Transportation Shipment Summary as the table name input. On the Layer Style pane we change the color for this line layer to red. We also keep the "Routes" map layer, which is drawn from the Transportation Routes Map Layer with the default dark blue color:

In the Analytics module of Cosmic Frog, dashboards that show graphs of scenario outputs, sliced and diced to the user’s preferences, can quickly be configured. Like Maps, this functionality is not Hopper specific and other Cosmic Frog technologies use these extensively too. We will cover setting up a Hopper specific visualization, but not all the details of configuring dashboards. Please review these resources on Analytics in Cosmic Frog first if you are not yet familiar with these:
We will do a quick step-by-step walk-through of how to set up a visualization comparing scenario costs by cost type in a new dashboard:

The steps to set this up are detailed here; note that the first 4 bullet points are not shown in the screenshot above:
All Hopper specific Help Center articles can be found in the Hopper - Transportation Optimization section under Navigating Cosmic Frog.
There are also several models in the Resource Library that transportation optimization users may find helpful to review. How to use resources in the Resource Library is described in the help center article “How to Use the Resource Library”.
Cosmic Frog users can now visualize routes from Hopper (transportation optimization) and Neo (network optimization) on real road networks instead of straight lines. Valhalla, an open-source routing engine for OpenStreetMap, is used to calculate the waypoints.
This guide assumes familiarity with how to configure maps and their layers in Cosmic Frog. See the Getting Started with Maps help center article for the basics.
There are 2 tables available in the Table Name drop-down on the Condition Builder panel of a map layer for which waypoint routes can be generated:
In both cases the configuration is similar; first we will cover a Hopper example and then a Neo example.

Please note:


In our example case neither warning comes up, and the user turns on the Waypoint Routes option:

Processing time depends on the number of origin-destination pairs. As an indication, a larger Hopper model with ~3,000 routes took a little under 4 minutes to generate all waypoints.
Users can monitor the progress of the waypoint routes generation by opening the Model Activity panel:

Once the calculation completes, the map is updated to show the routes the waypoints have been calculated for on the road network:

In this Neo example, we will generate road network flows for the following DC to Customer flows. This represents 1,333 unique origin-destination pairs:


Once the waypoints have been calculated, the map updates and now looks as follows:

Showing supply chains on maps is a great way to visualize them, to understand differences between scenarios, and to show how they evolve over time. Cosmic Frog offers users many configuration options to customize maps to their exact needs and compare them side-by-side. In this documentation we will cover how to create and configure maps in Cosmic Frog.
In Cosmic Frog, a map represents a single geographic visualization composed of different layers. A layer is an individual supply chain element such as a customer, product flow, or facility. To show locations on a map, these need to exist in the master tables (e.g. Customers, Facilities, and Suppliers) and they need to have been geocoded (see also the How to Geocode Locations section in this help center article). Flow-based layers are based on output tables, such as the OptimizationFlowSummary or SimulationFlowSummary; to draw these, the model needs to have been run so that outputs are present in these output tables.
Maps can be accessed through the Maps module in Cosmic Frog:

The Maps module opens and shows the first map in the Maps list; this will be the default pre-configured “Supply Chain” map for models the user created and for most models copied from the Resource Library:

In addition to what is mentioned under bullet 4 of the screenshot just above, users can also perform following actions on maps:

As we have seen in the screenshot above, the Maps module opens with a list of pre-configured maps and layers on the left-hand side:

The Map menu in the toolbar at the top of the Maps module allows users to perform basic map and layer operations:

These options from the Map menu are also available in the context menu that comes up when right-clicking on a map or layer in the Maps list.
The Map Filters panel can be used to set scenarios for each map individually. If users want to use the same scenario for all maps present in the model, they can use the Global Scenario Filter located in the toolbar at the top of the Maps module:

Now all maps in the model will use the selected scenario, and the option to set the scenario at the map-level is disabled.
When a global scenario has been set, it can be removed using the Global Scenario Filter again:

The zoom level, how the map is centered, and the configuration of maps and their layers all persist. After moving between other modules within Cosmic Frog or switching between models, when the user comes back to the map(s) in a specific model, the map settings are the same as when they were last configured.
Now let us look at how users can add new maps, and the map configuration options available to them.

Once done typing the name of the new map, the panel on the right-hand side of the map changes to the Map Filters panel which can be used to select the scenario and products the map will be showing. If the user wants to see a side-by-side map comparison of 2 scenarios in the model, this can be configured here too:

In the screenshot above, the Comparison toggle is hidden by the Product drop-down. In the next screenshot it is shown. By default, this toggle is off; when sliding it right to be on, we can configure which scenario we want to compare the previously selected scenario to:

Please note:
Instead of setting which scenario to use for each map individually on the Map Filters panel, users can instead choose to set a global scenario for all maps to use, as discussed above in the Global Scenario Filter section. If a global scenario is set, the Scenario drop-down on the Map Filters panel will be disabled and the user cannot open it:

On the Map Information panel, users have a lot of options to configure what the map looks like and what entities (outside of the supply chain ones configured in the layers) are shown on it:

Users can choose to show a legend on the map and configure it on the Map Legend pane:

To start visualizing the supply chain that is being modelled on a map, the user needs to add at least 1 layer to the map, which can be done by choosing “New Layer” from the Map menu:

Once a layer has been added or is selected in the Maps list, the panel on the right-hand side of the map changes to the Condition Builder panel which can be used to select the input or output table and any filters on it to be used to draw the layer:

When looking at the DC to Customer Flows layer in the Maps list, we see that the Number of lines displayed in it is 1,333:

The condition used in the Condition Builder (flowtype = 'CustomerFulfillment') filters the OptimizationFlowSummary output table down to 3,999 records for the Baseline scenario. However, since each customer receives 3 different products (all from the same DC), the number of unique source-destination pairs, and therefore the number of lines on the map, is 3,999 / 3 = 1,333.
We will now also look at using the Named Filters option to filter the table used to draw the map layer:

In this walk-through example, user chooses to enable the “From Salt Lake City DC” named filter:

Similar to above, these 273 filtered records result in fewer lines (91) drawn on the map, since each customer served by the Salt Lake City DC receives 3 products from it. Therefore, the number of unique origin-destination pairs is 273 / 3 = 91.
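The relationship between filtered records and drawn lines can also be checked directly against the output table. Here is a pandas sketch with tiny illustrative data (the OriginName, DestinationName, and ProductName columns are fields of the OptimizationFlowSummary table; the location and product names are made up):

```python
import pandas as pd

# A tiny illustrative stand-in for the filtered OptimizationFlowSummary records.
flows = pd.DataFrame({
    "OriginName":      ["DC_SaltLakeCity"] * 6,
    "DestinationName": ["CZ_Boise"] * 3 + ["CZ_Provo"] * 3,
    "ProductName":     ["P1", "P2", "P3"] * 2,
})

records = len(flows)                                                      # 6 filtered records
lines = len(flows[["OriginName", "DestinationName"]].drop_duplicates())  # 2 unique O-D pairs
print(records, "records ->", lines, "lines on the map")                  # 6 records -> 2 lines
```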
In the next layer configuration panel, Layer Style, users can choose what the supply chain entities that the layer shows will look like on the map. This panel looks somewhat different for layers that show locations (Type = Point) than for those that show flows (Type = Line). First, we will look at a point type layer (Customers):

Next, we will look at a line type layer, Customer Flows:

At the bottom of the Layer Style pane a Breakpoints toggle is available too (not shown in the screenshots above). To learn more about how these can be used and configured, please see the "Maps - Styling Points & Flows based on Breakpoints" Help Center article.
Labels and tooltips can be added to each layer, so users can more easily see properties of the entities shown in the layer. The Layer Labels configuration panel allows users to choose what to show as labels and tooltips, and configure the style of the labels:

This next screenshot shows the tooltip of a flow line on the DC to Customer Flows layer. The tooltip is configured to show the following fields from the OptimizationFlowSummary table: OriginName, DestinationName, ProductName, FlowQuantity, TransportationCost, and TransportationTime. Display Aggregates is turned on:

When pinning a tooltip, the Details View is shown on the right-hand side of the map:

Finally, this example shows a map where the labels for a customer locations layer are showing. They have been configured to use blue-black as the font color, 14 as the font size, and a background color set to cyan, and they are positioned below (position = bottom) the customer's map icon:

When modelling multiple periods in network optimization (Neo) models, users can see how these evolve over time using the map:

Users can now add Customers, Facilities and Suppliers via the map:

After adding the entity, we see it showing on the map, here as a dark blue circle, which is how the Customers layer is configured on this map:

Looking in the Customers table, we notice that CZ_Philadelphia has been added. Note that while its latitude and longitude fields are set, other fields such as City, Country and Region are not automatically filled out for entities added via the map:

In this final section, we will show a few example maps to give users some ideas of what maps can look like. In this first screenshot, a map for a Transportation Optimization (Hopper engine) model, Transportation Optimization UserDefinedVariables available from Optilogic’s Resource Library (here), is shown:

Some notable features of this map are:
The next screenshot shows a map of a Greenfield (Triad engine) model:

Some notable features of this map are:
This following screenshot shows a subset of the customers in a Network Optimization (Neo engine) model, the Global Sourcing – Cost to Serve model available from Optilogic’s Resource Library (here). These customers are color-coded based on how profitable they are:

Some notable features of this map are:
Lastly, the following screenshot shows a map of the Tariffs example model, a network optimization (Neo engine) model available from Optilogic’s Resource Library (here), where suppliers located in Europe and China supply raw materials to the US and Mexico:

Some notable features of this map are:
We hope users feel empowered to create their own insightful maps. For any questions, please do not hesitate to contact Optilogic support at support@optilogic.com.
Incremental Change is a powerful new capability in Cosmic Frog’s NEO engine (Network Optimization) that helps you manage the transition between two network states – such as moving from your current baseline network to an optimized future state. Rather than implementing all changes at once, Incremental Change allows you to control the pace and sequence of changes over time, making network transformations more practical and manageable.
This feature bridges the gap between theoretical optimization results and realistic implementation plans by generating a sequenced roadmap of network changes.
In this documentation, we will discuss the impact of Incremental Change, explain how to use it, and then walk through 2 demo models showing examples of the new capabilities. These models can be copied to your own Optilogic account from the Resource Library:
Please also see the following brief video for an introduction to Incremental Change and an overview of the second demo model, Incremental Change - Supplier Base Transition:
Supply chain networks rarely change overnight. Whether you are opening new distribution centers, shifting supplier bases, or adapting to seasonal demand fluctuations, real-world constraints limit how quickly you can implement changes. Organizations face practical limitations including:
Incremental Change addresses these realities by helping you answer critical questions: What is the best order of changes? What is the optimal rate of change? What tolerance levels are appropriate for different types of modifications?
Understand which network modifications deliver the greatest value earliest in your transition. Incremental Change provides clear visibility into expected savings versus the number of changes required, along with detailed change checklists for each scenario.
Create realistic, step-by-step roadmaps for moving from your current network to your target network. You will receive:
Manage network evolution in response to changing business conditions while respecting operational constraints. Balance total cost optimization against your tolerance for change, with detailed monthly implementation plans.
Cosmic Frog's Incremental Change currently supports the following network changes:
Progressive Facility Expansion: When opening a new distribution center, specify that you want to reassign up to 10 customers per month, and the system determines which customers to transition each month to maximize savings.
Seasonal Production Smoothing: For products with seasonal demand patterns, set constraints to ensure facility production never varies by more than 10% month-over-month, maintaining operational stability.
Multi-Year Facility Rollouts: Plan the optimal sequence for opening 10 facilities over five years including a detailed 2-year implementation roadmap.
Strategic Supplier Transitions: Systematically shift your supplier base. For example, progressively move sourcing out of or into a particular region – with Incremental Change determining the optimal sequence of supplier changes.
Incremental Change uses a new input table and generates 2 new output tables with different levels of detail:
Input: Use the Incremental Change Constraints table to set the change type(s) being included and your change budget for each period.
Outputs:
By leveraging Incremental Change, you can transform theoretical network optimization results into actionable, phased implementation plans that respect real-world operational constraints while maximizing value delivery.
Some additional pointers that may prove helpful are as follows:
In this example model, "Incremental Change - Production Smoothing", we will see how we can ensure that increases in production from one month to the next are limited to a certain percentage using the Incremental Change feature. This model can be copied from the Resource Library to your own account in case you want to follow along.
The following screenshot shows which input tables are populated in this example model:

This next screenshot shows the customer and facility locations on a map:

As mentioned, the demand varies quite a lot by period; the following bar chart shows the total demand by period:

Now, we will take a closer look at the new Incremental Change Constraints input table:

Please note that this table also has a Notes field, which is not shown in the screenshot. Notes fields are also often used by scenarios, for example to filter out a subset of records for which the Status needs to be switched based on the text contained in the Notes field.
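As a sketch of what a production increase budget of, for example, 2% enforces (illustrative only; in the model this is configured via the Incremental Change Constraints table), production in each period may be at most 2% higher than in the previous period:

```python
# Sketch: check that production increases period-over-period stay within the budget.
# A 2% budget corresponds to production[t] <= production[t-1] * 1.02 for each facility.
def within_increase_budget(production_by_period, budget_pct):
    limit = 1 + budget_pct / 100.0
    return all(curr <= prev * limit
               for prev, curr in zip(production_by_period, production_by_period[1:]))

facility_01 = [1_450_000, 1_479_000, 1_508_580]            # illustrative quantities per period
print(within_increase_budget(facility_01, budget_pct=2))   # True: each step is at most +2%
print(within_increase_budget([1_450_000, 5_550_000], 2))   # False: a ~283% jump
```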
The following 5 scenarios are included in this example model:

After running all 5 scenarios, we will first look at the 2 new output tables, and then at several graphs set up in the Analytics module specifically for this model.
The following screenshot shows the Optimization Incremental Change Summary output table:

The other new output table, the Optimization Incremental Change Report, contains the changes at a more detailed level. For production increase incremental change types, it indicates the change in production at the individual facilities for the periods the constraint(s) are applied. In the next screenshot, we look at the report which is filtered for the fourth scenario and only the records for Facility_01 are shown:

The Actual Change column here reflects the difference in the quantity produced between the period and the referenced (previous) period. The first record tells us that in Period2 Facility_01 produced 4,092 units less than it produced in Period1, etc.
Now, we will look at several graphs set up in the Analytics module. The names of the dashboards containing the graphs start with “Incremental Change - …”. The first one we will look at is the Production by Period graph found on the “Incremental Change – Production by Period” dashboard. Here we compare the first scenario (No Limit) with the most constrained scenario which is the second one (only 2% increase in production allowed in each subsequent month):

The blue line and numbers represent the production quantities of the first scenario (no limit). Since there are no limits set on how much production can fluctuate, the production is equal to the demand in that period to avoid any pre-build inventory and its holding costs. Since the demand varies greatly between each period, so does the production. For example, in Period2 the quantity produced is 1.45M units and in Period3 this is 5.55M units, which is a 280% increase.
The green line and numbers represent the production quantities of the second scenario (max 2% production increase). To even out the production and stay within the 2% increase restriction, we see that production in periods 1 and 2 is higher in this scenario as compared to the first one. We can deduce that inventory is being pre-built here in order to be able to fulfill the high demand in Period3. We will show the pre-build inventory levels by period in another upcoming dashboard.
The next graph, from the "Incremental Change - Demand and Production" dashboard, compares the production by period for all 5 scenarios using a bar-chart:

We see the same pattern for scenarios 1 and 2 as we discussed for the previous graph: high fluctuations in scenario 1 and only slight increases period-over-period in scenario 2. Scenarios 3, 4, and 5 gradually allow higher increases in production (5%, 10%, and 20%). To reduce the cost of holding pre-build inventory in stock, these scenarios show more fluctuation in the production quantity, so that product is produced as close as possible in time to when it is demanded, while still adhering to the % increase limitations.
We can also illustrate this by looking at the amount of pre-build inventory across the periods for each scenario, which is shown on the “Incremental Change – Pre-build Inventory” dashboard:

We notice that the first scenario is not shown here as it does not have any pre-build inventory in any of the periods: the production in each period matches the demand in each period to avoid any pre-build holding costs. For the other scenarios, we see, as expected, that the more stringent the limitation on the fluctuation in production quantity is, the more pre-build inventory is being held. In scenarios 4 and 5 (10% and 20% increases allowed), there is no need to have pre-build inventory in Period4 for example, whereas this is needed in scenarios 2 and 3 (2% and 5% increases allowed).
We can of course also look at the costs associated with each of the 5 scenarios, using the Optimization Network Summary output table:

The costs are made up of transportation costs ($41.2M for all scenarios), in transit holding costs ($51k for all scenarios), production costs ($4.2M for all scenarios), outbound handling costs ($19.5M for all scenarios), and pre-build holding costs. Only the pre-build holding costs differ between the scenarios due to the timing of production being different because of the production increase incremental change constraints. As we saw above, no pre-build inventory was held in scenario 1, so the pre-build holding cost for this scenario is 0. Most pre-build inventory was held in the most stringent scenario, scenario #2, and therefore the pre-build holding costs are highest for this scenario, decreasing as the production increase change constraint is gradually loosened from 2 to 20%.
Finally, we will have a look at a map to see the flows from the facilities to the customers:

In this map we see all flows for all periods for scenario 3. We notice that customers source from their closest facility. Since this is a multi-period model, we can also configure the map to see the flows for just a single period or a subset of the periods, by using the fold-out periods toolbar located at the bottom-right in the screenshot.
This example demonstrates how Incremental Change can be used to determine the optimal sequence of supplier changes when transitioning from a mixed supplier base to a single-region sourcing strategy.
The model evaluates two potential transitions:
Supplier onboarding is limited to two suppliers per quarter, reflecting operational onboarding limits.
In addition, the model evaluates whether adding a new West Coast port and factory improves the network. These locations could realistically only become operational starting in 2028, which is modeled using incremental change constraints.
This example model, "Incremental Change - Supplier Base Transition", can be copied from the Resource Library if you want to follow along.
This example is based on the Global Supply Chain Strategy model from the Resource Library (here) and has been modified to demonstrate Incremental Change functionality.
This model includes two supplier regions:
Europe
China
The current supplier base consists of:
The additional suppliers are included in the model but initially inactive.
This screenshot shows the existing and potential suppliers in Europe:

The next screenshot shows the existing and potential suppliers in China:

Materials flow through the network as follows:
The model also includes potential West Coast expansion locations:
These facilities are evaluated in selected scenarios.
The following screenshot shows the port, factory, DC, and customer locations in North America, with the port to factory and factory to DC flows showing too:

First, we will cover the basic inputs in this model that were added to or modified from the original Global Supply Chain Strategy model. After that, we will explain the inputs that have been added to model the supplier base transition and those that enable consideration of using the new port and factory on the US west coast.
Several changes were made to the base model to support the incremental change example:

The model evaluates transitions to either an all-Europe or all-China supplier base under the following rules:
This supplier base transition is visualized on the following timeline:

After running scenarios with the above transition settings, we will also run 2 accelerated transition scenarios:
This looks as follows on a timeline:

Supplier availability is controlled using two input tables:
Supplier Capabilities
Supplier Capabilities Multi-Time Period
Three configuration sets are included:

Scenarios activate the appropriate configuration by switching the Status field to Include for the relevant records.
In the following screenshot of the Supplier Capabilities table, we see 12 (of the 21) suppliers that can supply RM4. Since the Paris supplier is the one which currently supplies this raw material, we see that the Status for that record is set to Include, whereas it is set to Exclude for all others. This is similarly set up for the other 8 raw materials. In scenarios where the supplier base is being shifted, the Status of the other records is set to Include.

The next screenshot shows the Supplier Capabilities Multi-Time Period table which is filtered for Supplier Name = Chengdu or Mainz and Product Name = RM3. Chengdu (China) is the original supplier for RM3, while Mainz (Europe) is not currently used at all and will be considered for supplying RM3 in scenarios that transition the supplier base to Europe:

To ensure meaningful dual sourcing, each raw material must be supplied by two suppliers.
To prevent trivial allocations (e.g., one supplier providing only a few units), conditional minimum flow constraints require that each active supplier provides approximately 35% of total supply. This ensures both suppliers meaningfully contribute to supply.
The 2 screenshots below show the setup for RM6 on the Flow Constraints input table:


Additional flow count constraints enforce the supplier structure:
These 2 screenshots show the flow count constraints which are used (Status switched from Exclude to Include) in the scenarios where the supplier base is shifted; they are explained underneath the second screenshot:


Incremental Change controls the rate of supplier onboarding.
Two configurations are included:

Additional constraints prevent supplier changes after the transition period.
The first 2 constraints shown in the screenshot below are for the standard transition type (Budget = 2 during transition, 0 after) and are included (Status switched to Include) in the scenarios where the supplier base is shifted. The bottom 2 constraints are used together with the one in the second record for scenarios modeling the accelerated transition (Budget = 3 during transition, 0 after):

The model evaluates adding:
These facilities:
Facilities table:
Records were added for Los Angeles Port and Reno Factory. As these locations do not currently exist and will be considered for inclusion, their Facility Status = Consider and Initial State = Potential:

Facilities Multi-Time Period table:
Since we only want to consider including the west coast locations from 2028 onwards, we need to make sure that at the beginning of the model horizon they are closed. This is done by adding 2 records for 2026_Q1 to the Facilities Multi-Time Period table setting the Status for both the Reno Factory and the Los Angeles Port to Closed, see screenshot below. For the remaining periods in the model the Incremental Change Constraints (see further below in this section) take care of when the factory and port are considered for opening.

Production Policies table:
Records are added for the Reno Factory so it can manufacture all 3 finished goods, using the existing BOMs:

Transportation Policies table:
Records are added so that Los Angeles Port can receive raw materials from the ports in Rotterdam and Shanghai, the Los Angeles Port can ship these to the factory in Reno, and Reno Factory can serve the finished goods to all 7 DCs:

Incremental Change Constraints table:
Here, 2 records are added to set up the desired behavior of allowing the Reno Factory and Los Angeles Port to be added to the network from 2028_Q1 onwards.

The other 4 "Facilities Opened" records in this table are not used in this model. They were added in case users want to set up and run some scenarios themselves where the west coast port and factory are allowed to be added from 2029 (records 3 and 4) or from 2030 (records 5 and 6) onwards.
The following table shows the scenarios that were run in this example model. Initially the first 5 were run and, based on the results, numbers 6 and 7 were added and run too:

The Scenarios module of this example model looks as follows:

For running the scenarios, 2 of the technology parameters were changed from their defaults:

After running all scenarios, it is time to examine the outputs. We will start by looking at the new Optimization Incremental Change Summary and Optimization Incremental Change Report output tables and then visualize the supplier base changes on maps and a timeline grid. For these, we will mostly focus on scenarios 4 and 5 which show all incremental change constructs we have used in the model. Finally, we will also have a look at the financial impact and compare all scenarios.
The Optimization Incremental Change Summary table shows how much of each change budget is used in each period. Two screenshots of this output table are shown next; in both, the table is filtered for scenario number 5 (field not shown) where the supplier base is moved to Europe and adding the west coast locations to the network is considered:

During each of the first 3 quarters of the transition period, 2 suppliers are added; then there is a pause in adding suppliers, and towards the end of the transition period another 9 suppliers are added, essentially as late as possible. We can interpret this as:
The next screenshot shows the same table, still filtered for scenario 5, but now the records for the Facilities Opened change type constraints are shown:

We see that the budget was 2 for each of the periods in the 2028 – 2031 timeframe, but the Actual Change column indicates that no change (0) happened in any of those periods. In this scenario, it is not beneficial to start using the west coast port and factory anytime from Q1 2028 onwards.
Next, we will cover the Optimization Incremental Change Report output table. This table has a record for each incremental change that was made, including details on what the change was. This table is filtered for scenario 4 (field not shown) where the supplier base is moved to China and the west coast locations are considered (note that records beyond Q1 2029 are not shown here):

From the first 4 records for Q1 of 2027, we can tell that suppliers in Hefei and Taiyuan were added to the supplier base, while suppliers in Lyon and Madrid were removed. Similarly, 2 suppliers are added and removed in each of the next 2 quarters. Then in Q1 of 2028, the Reno Factory and Los Angeles Port are opened, which is apparently beneficial to do as early as possible when moving the supplier base to China. This is due to the lower transportation costs because of the shorter distance from Shanghai to Los Angeles as compared to going to the Charleston or Altamira ports.
In summary:
The next map shows the final configuration of the supplier base for scenario 4 where it is moved entirely to China. Notice that we have used the Period selector bar (at the top of the map) to show the map for Q1 of 2030 when the transition is complete. You can use this period selector to step through the periods to see the gradual changes over time.

The next map is also for scenario 4 but now showing the North American locations and flows (except for the DC – Customer flows) for the same period of Q1 2030. We see that the Los Angeles Port and Reno Factory are being used in this scenario.

Similar to the first map shown above, the next map shows the final supplier configuration in Q1 2030 for scenario 5, where the whole base is moved to Europe:

The next 2 timeline grids on the Supplier Transition Timeline dashboard (Analytics module) are generated from a custom table “timeline_rm_bysupplier_byperiod”, which in turn was generated from the Optimization Supply Summary output table. The appendix describes the steps to create the custom table and how to re-create it in case (new) scenarios are added / adjusted and run.
The first screenshot is for scenario 4, the second for scenario 5. They show which supplier supplies which RM when. Note that Q1-Q3 of 2026 are not included as the supplier configuration is equal to the Q4 2026 configuration. Similarly, the timeframe of Q2 2030 through Q4 2031 is not included in this dashboard either, since the supplier configuration is equal to the Q1 2030 configuration. Not all suppliers and period columns are shown in the screenshots, users can scroll horizontally and vertically in the dashboard to examine the full timeline.

Some things we notice here for scenario 4 are:

Some things we notice examining the above grid for scenario 5 are:
Since we are interested in looking at the impact of requiring the supplier base to be transitioned over sooner, scenarios 6 and 7 were run too where the budget for adding suppliers is set to 3 per quarter. We know from scenarios 4 and 5 that the west coast locations are only used when moving the supplier base to China, therefore we run scenario 6 (where we speed up the transition to the China supplier base) with these locations still considered. We do not consider them in scenario 7 where we speed up the transition to the Europe supplier base.
We will now look at how the costs compare for all 7 scenarios that were run, using the next 2 screenshots of the Optimization Network Summary output table and a screenshot of the “Financials: Scenario Cost Comparison” chart. The latter can be found on the Scenario Comparison dashboard in the Analytics module:



Comparing all scenarios reveals:
Finally, the following 4 charts for scenarios 4-7 which show the number of suppliers by region over time can be found in the “Suppliers by Region – Scenario Comparison” dashboard in the Analytics module:

Please do not hesitate to contact Optilogic support at support@optilogic.com in case of questions or feedback.
To create the timeline grid on the Supplier Transition Timeline dashboard, we first created a custom table named timeline_rm_bysupplier_byperiod, where we added columns for scenarioname, supplier, and 1 column for each period in the model. The latter we named q1_2026, q2_2026, etc. rather than 2026_Q1, 2026_Q2, etc. (which is how the periods in the model are named) as column names in custom tables cannot start with a digit.
Next, this table was populated from the outputs in the Optimization Supply Summary output table, using a SQL Query, which is saved in the model and can be used to refresh the custom table if scenarios are modified/added and (re-)run. For this we use the SQL Editor application on the Optilogic platform:

The following screenshot shows the central part and right-hand side panel of the SQL Editor application:

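Conceptually, the saved SQL query pivots the Optimization Supply Summary so that each period becomes a column of the custom table. As a hedged illustration only (hypothetical field names, and Python/pandas instead of the actual SQL saved in the model), the transformation looks roughly like this:
import pandas as pd
# Hypothetical field names; the actual Optimization Supply Summary schema may differ.
supply = pd.read_csv("optimization_supply_summary.csv")
timeline = (
    supply.pivot_table(index=["scenarioname", "suppliername"],
                       columns="periodname",
                       values="productname",
                       aggfunc="first")   # assumes one raw material per supplier per period
          .reset_index()
)
# The period columns would still need the q1_2026-style renaming described above,
# since custom table column names cannot start with a digit.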
Please note that this custom table, the Supplier Transition Timeline dashboard, and SQL Query to populate it are specific to how this model is set up. All three will need to be updated if the model is changed in certain ways. For example, when adding/removing/renaming periods or if it becomes possible for 1 supplier to supply more than 1 raw material within a period.
The Multiple Compartments (MC) feature allows a single transportation asset (e.g., truck, trailer) to be divided into independent compartments, each with its own capacity and compatibility constraints. This enables Hopper to produce more accurate, feasible, and operationally realistic transportation plans.
📝 Note: This article covers an advanced Hopper feature. If you are new to Hopper, review the Getting Started with Hopper guide first.
Get up and running in 4 steps:
⚠️ Important
Every asset that omits the Compartment Configuration Name field is still modeled as a single-compartment asset. Only assets with a linked configuration are switched to compartment-level validation.
Use this feature when one or more of your transportation assets have physically separate sections that must be loaded independently. Common examples:
Without Multiple Compartments, Hopper checks only that the total shipment fits the total asset capacity. This can produce plans that look valid in the optimizer but cannot be executed operationally because individual compartments are overloaded or contain incompatible products.

Three changes have been made to Hopper's data model: one new input table, one new field on an existing input table, and one new field on an existing output table.
This table defines the structure and rules of the compartments that make up each configuration. Add one row per compartment; rows with the same Compartment Configuration Name form a single configuration.


💡 Tip
When multiple capacity fields (Quantity, Volume, Weight) are populated, Hopper enforces all of them simultaneously. A shipment is only assigned to a compartment if it fits within every constraint.
📝 Note
Allowed Product Name accepts four value types: (1) blank — any product allowed; (2) a single product name — only that product; (3) a group name — only members of that group; (4) a named filter — only products matching the filter criteria.
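To make the combined effect of the capacity fields and Allowed Product Name concrete, the following minimal Python sketch mimics the kind of per-compartment check described above (illustrative only, with made-up field names; not Hopper's actual implementation):
def fits_in_compartment(shipment, compartment):
    # Product compatibility: a blank allowed-products value means any product is allowed
    allowed = compartment.get("allowed_products")
    if allowed and shipment["product"] not in allowed:
        return False
    # Every populated capacity limit (Quantity, Volume, Weight) must hold simultaneously
    for measure in ("quantity", "volume", "weight"):
        limit = compartment.get(measure)
        if limit is not None and shipment[measure] > limit:
            return False
    return True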
A new Compartment Configuration Name field has been added to the Transportation Assets table. Populate it with the name of a configuration defined in the Compartment Configurations table to enable compartment-level modeling for that asset.

Effect on Hopper's solver:
💡 Tip
To compare single-compartment vs. multi-compartment results for the same asset, use one of these approaches:
1) Scenario items approach: add 1 record for the asset in the Transportation Assets table and set its Compartment Configuration Name to the multi-compartment configuration to be used. Then set up a scenario for the single-compartment run with an item that blanks the Compartment Configuration Name field (action: configurationname = '').
2) Dual-asset approach: add two records for the same physical asset under different names in the Transportation Assets table — one with the configuration name populated, one with it blank. Change the value of the Status field (Include/Exclude) to activate one at a time via scenario items.
The Transportation Optimization Shipment Summary output table now includes a Compartment Name column. For any shipment transported on a multi-compartment asset, this column shows which compartment it was assigned to.
📝 Note
Compartment Name is blank for shipments on assets that do not use a compartment configuration (i.e., single-compartment assets).
This field enables:

The following example runs two scenarios with the same "MK Artic Multi Temp" asset to illustrate the difference:
Both scenarios have other single-compartment assets available. In each scenario, the MK Artic Multi Temp asset is used for one route. The charts below show how much of each product is on board at each stop along that route (stops are ordered by route sequence on the x-axis; the asset is loaded at Milton Keynes and Chelmsford depots and delivers to the CZ locations).


Key observations:
⚠️ Important
Higher throughput in Scenario A is not a benefit — it reflects an infeasible plan. Scenario B costs more and delivers less on this route because it models reality accurately. The higher cost is expected and correct.
Multiple Compartments moves Hopper from aggregate capacity modeling to granular compartment-level modeling. This means:
Questions? Contact the Optilogic support team at support@optilogic.com.
Scenarios let you rapidly explore "what-if" questions against an existing Cosmic Frog model. Define one or more data changes (scenario items), run the scenarios, then compare outputs - all without altering your baseline input data.
Follow these five steps to run your first scenario:
💡 Tip: Use Leapfrog (Cosmic Frog's AI assistant) to create scenarios and items from plain-language prompts - no manual configuration needed.
A scenario defines one or more input-table changes to apply before running a solve. Common examples include:
In the context of this documentation, we mean the following with scenario and scenario item:
In other words: a scenario without any scenario items uses all the data in the input tables as is (often called Baseline); most scenarios will contain 1 or more scenario items to test certain changes as compared to a baseline.
Open the Scenarios module from the Module menu. A freshly opened module looks like this:

The Scenario drop-down (top of module) provides quick access to common actions:

Each scenario is associated with one engine. To change it, select the scenario and use the radio buttons in the central panel:

📝 Note: Running with multiple technologies
You can solve the same scenario with more than one engine sequentially: assign the first technology → run → change technology → run again. Be aware that any scenario edits between runs may cause results to differ for subsequent runs.
💡 Tip: Dendro workflow
To optimize inventory policies with Dendro: first build and validate a Throg (simulation) scenario, then switch its technology to Dendro and run.
Right-click an existing scenario or the Scenarios folder and choose New Scenario, or use the New Scenario option from the Scenario drop-down menu. Enter a name when prompted:

Select the target scenario, then right-click → New Item (or use the Scenario drop-down). Name the item - its configuration panel opens automatically:

After selecting the table, specify the change in the Actions field. Intelli-type suggests column names as you type:

Once the column to change has been typed in, we can set its new value. In our example we want to set the value of the status column to Exclude:

Intelli-type also validates syntax. Incorrect quote style (need to use single quotes, not double quotes):

Unrecognized column name:

📝 Note: For full Actions syntax, see the Writing Syntax for Actions Help Center article.
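For example, using the status change from this section (string values must use single quotes):
status = 'Exclude'      is accepted
status = "Exclude"      is flagged by Intelli-type because of the double quotes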
By default, a scenario item's action applies to every record in the selected table. Add a filter to restrict which records are changed. Two methods are available:
If Named Filters exist for the selected table, you can apply one directly to the scenario item:

After selecting a filter, the Filter Grid Preview updates to show exactly which records will be affected:

💡 Tip: Why prefer Named Filters?
Named Filters are pre-validated - you have already confirmed they select the right records when creating the filter. The Condition Builder requires you to write syntax manually, which is more error-prone. Named Filters also show a record preview. (Preview for Condition Builder is coming soon.)
📝 Note: One Named Filter per scenario item
Each scenario item supports a maximum of one Named Filter. If you need to combine multiple filter conditions, create a new Named Filter that merges all required conditions.
Use the Condition Builder when no applicable Named Filter exists, or for ad hoc conditions:

📝 Note: For condition syntax, see the Writing Syntax for Conditions Help Center article.
When multiple items modify the same column in the same table, they execute top-to-bottom. Order matters. Consider the following 2 scenarios where both scenario items are applied to the Quantity column in the Customer Demand table:

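As a hypothetical illustration of why the order matters: suppose one item applies quantity = quantity * 1.1 and another applies quantity = 500. If the multiplication runs first, every record ends at 500 (the second item overwrites the first); if the fixed value is set first, every record ends at 550.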
💡 Tip: Drag items up or down within a scenario to reorder them.
A single scenario item can be shared across multiple scenarios. The Item Assignments panel (right side) appears automatically when an item is selected:

Switch to the Item Information tab on the same panel to edit the item's name or add a description:

The Scenario Assignments panel (right side) appears when a scenario is selected, showing all available items and which are assigned:

Switch to the Scenario Information tab to edit the scenario name or add a description:

The scenarios list includes several navigation and status indicators that become useful as your model grows:


Click the green Run button (top right in Cosmic Frog), select the scenario(s) to solve, and configure technology parameters. See the Running Models & Scenarios in Cosmic Frog Help Center article for full details.
Sensitivity @ Scale automates demand-quantity and transportation-cost sensitivity analysis with a single click. See the Sensitivity at Scale Scenarios Help Center article for more information.
In addition, there are three S@S-related utilities available in the Utilities module:

📝 Note: See the How to Use & Create Cosmic Frog Model Utilities Help Center article for details on using and building utilities.
Select one or more scenarios or items, then right-click → Delete or use the Delete option from the Scenario drop-down menu at the top of the module. Key behaviors to keep in mind:

Deleting removes only the two scenarios. Items used exclusively by those scenarios move to Unassigned Items; items shared with other scenarios remain untouched.

Deleting removes both selected scenarios and all 6 selected items - including from any other scenarios that use them. The one unselected item in these 2 scenarios, "Add Inventory capacity at CDCs", remains.
⚠️ Important: Deleting a scenario item removes it from all scenarios that it is assigned to, not just the selected scenario. Review item assignments before deleting.
Use Delete Scenario Results (context menu or Scenario drop-down) to clear output data for one or more scenarios without removing the scenarios themselves:

Think through scenario naming conventions ahead of time. You can for example:
Most input tables include Status and Notes fields. A powerful scenario pattern is:
This keeps all data in the model without interfering with the Baseline.
Custom columns let you store alternative values in a table and reference them in scenario items, e.g.:
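For instance (hypothetical column name): add a custom projected2026quantity column to the Customer Demand table, populate it with next year's values, and point a scenario item at it with an action like quantity = projected2026quantity. The scenario then runs with the alternative quantities while the Baseline keeps the original Quantity values.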
The Copy Scenarios utility (Utilities module → Copy to a Model) copies a single scenario or all scenarios from one model to another, including all items and assignments.
Use Leapfrog for scenario and item creation, and also for manipulating scenario-specific data.
Leapfrog can create scenarios and scenario items from a plain-language description - ideal for quickly spinning up variations without manually configuring each item. See the specific section on this in the Getting Started with Leapfrog AI Help Center article.
🔧 Leapfrog Use Case: 2026 Demand Projections from 2025 Numbers
Model 2026 demand from 2025 quantities using a custom growth projections table. Leapfrog can be utilized to set this up, so no external tool needs to be used:
Happy scenario modeling! As always, please contact our Support team on support@optilogic.com for any questions or feedback.
The Run Python task allows you to execute Python scripts within a DataStar macro. You select a script, define its inputs (arguments), and run it as part of your workflow. This is ideal when built-in tasks are insufficient or when you want to reuse existing Python logic.
The Run Python task is especially useful when:
This walkthrough uses the end state of the DataStar Quick Start: Creating a Task using Natural Language guide as a starting point. At that point, a raw_shipments table has been imported and a Run SQL task has produced a customers table with unique customers. We will use a Run Python task to transform customer names from the format CZ1, CZ10, CZ100 to Cust_0001, Cust_0010, Cust_0100 - ensuring alphabetical sort matches customer number order and aligning the prefix with other data sources. The transformation steps are:
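As a quick illustration, these are the same string operations the appendix script applies to each customer name, shown here on a single value:
name = "CZ10"
name = name.lstrip("CZ")   # -> "10"
name = name.zfill(4)       # -> "0010"
name = "Cust_" + name      # -> "Cust_0010"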
The screenshot below shows the "Change customer names" Run Python task added to the macro:

Once a Python task is added to the macro canvas, its configuration tab opens on the right:

After clicking Use File with update_customer_names.py selected, the configuration updates as follows:

Before configuring arguments, we will cover the update_customer_names.py script. We can review its arguments so we can verify auto-detection is correct and look at the script body to understand what it does. You can copy the full script text from the Appendix.

With the script understood, let us use Detect arguments and configure the task:

The Notes section is the final configuration area. It is especially valuable for complex tasks or collaborative projects, enabling users to quickly understand what the task does. Formatting options are available above the text box:

Before running the task, we examine the customers table in the sandbox:

Now run the "Change customer names" task (hover over it on the macro canvas and click the play button) and examine the results:

The Run Manager logs are useful for monitoring progress and troubleshooting:

Instead of using an existing file, users can click the Create File button in the Code File area (see bullet 10 underneath the first screenshot of the Code File section) to create a new script directly on the Optilogic platform:

Templates are pre-built scripts available to all users from the Resource Library, covering common import/export patterns. To browse them, open the Resource Library application, filter for DataStar resources (button at the right top) and the Script tag. Clicking a resource also shows available documentation, which can be copied to your Optilogic account or downloaded.



The Python base image used by the Run Python task contains the Python libraries most used in conjunction with DataStar. If your script uses a library that is not included in this base image, you need to create a requirements.txt file and place it in the same location as your Python script. In this file, you need to list the names of the libraries (without quotes), 1 library name per line.
If your script uses a library that is not part of the base image, the Job Error Log of the Python task run will contain the following error: "ModuleNotFoundError: No module named '<module name>'".
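For example, if your script imported the requests and openpyxl libraries and (hypothetically) neither were part of the base image, the requirements.txt file placed next to your script would simply contain:
requests
openpyxl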
Keep the following in mind when developing scripts for Run Python tasks:
As always, our Support team is happy to help with any questions; they can be reached at support@optilogic.com.
Copy the script below into a .py file in your Optilogic account (via the Lightning Editor) and modify it as needed:
import argparse
from datastar import *

def main():
    parser = argparse.ArgumentParser(
        description="Script to update names of customers using format of CZ1, CZ10, etc. "
                    "to Cust_0001, Cust_0010, etc; copies the entire original table into a "
                    "new table (new_<original table name>) with just the customer names updated.")
    parser.add_argument("--project", required=True, help="Name of the DataStar project")
    parser.add_argument("--original_table", required=True, help="Original table with the customer name field to be updated")
    parser.add_argument("--column_name", required=True, help="Name of the field containing the name of the customer")
    args = parser.parse_args()

    # Connect to the DataStar project and its Project Sandbox
    project = Project.connect_to(args.project)
    sandbox = project.get_sandbox()

    # Read the original table into a dataframe
    df_table = sandbox.read_table(args.original_table)
    df_new_table = df_table

    # Strip the leading "CZ", left-pad the remaining number to 4 digits, and add the "Cust_" prefix
    df_new_table[args.column_name] = df_new_table[args.column_name].map(lambda x: x.lstrip('CZ'))
    df_new_table[args.column_name] = df_new_table[args.column_name].str.zfill(4)
    df_new_table[args.column_name] = 'Cust_' + df_new_table[args.column_name]

    # Write the result to a new table named new_<original table name> in the Project Sandbox
    sandbox.write_table(df_new_table, "new_" + args.original_table)
    print(f"new table new_{args.original_table} created with updated customer names")

if __name__ == "__main__":
    main()
The following link provides a downloadable (Excel) template describing the fields included in the output tables for Neo (Optimization), Throg (Simulation), Triad (Greenfield), and Hopper (Routing).
Anura 2.8 is the current schema.
A downloadable template describing the fields in the input tables can be downloaded from the Downloadable Anura Data Structure - Inputs Help Center article.
The best way to understand modeling in Cosmic Frog is to understand the data model and structure. The following link provides a downloadable (Excel) template with the documentation and explanation for every input table and field in the modeling schema.
A downloadable template describing the fields in the output tables can be downloaded from the Downloadable Anura Data Structure - Outputs Help Center article.
For a brief review of how to use the template file, please watch the following video.
If you are seeing a "Something went wrong" error when trying to open a model in Cosmic Frog — for example, Cannot read properties of undefined (reading 'call') — this is likely caused by a cached version of the application conflicting with a recent platform update. Follow the steps below in order to resolve it.
A hard refresh forces the browser to bypass its cache and reload all files directly from the server. It clears the cache for that specific page only and ensures the latest version is loaded. This is the quickest fix for pages that have not updated properly and should be tried first.
Press the following keyboard shortcut to perform a hard refresh:
Once the page has reloaded, try opening your model again.
If the hard refresh does not resolve the issue, clearing your full browser cache will remove all stored files and force the browser to fetch the latest version of Cosmic Frog from the server.
Press the following keyboard shortcut to open the Clear Browsing Data menu:
In the menu that appears:
Once cleared, refresh the Cosmic Frog page and try opening your model again.
If neither step resolves the problem, please contact the Optilogic support team. Include your browser type and version, and a screenshot of the error if possible.
Email: support@optilogic.com
In this quick start guide we will walk-through importing a CSV file into the Project Sandbox of a DataStar project. The steps involved are:
Our example CSV file is one that contains historical shipments from May 2024 through August 2025. There are 42,656 records in this Shipments.csv file, and if you want to follow along with the steps below you can download a zip-file containing it here (please note that the long character string at the beginning of the zip's file name is expected).
Open the DataStar application on the Optilogic platform and click on the Create Data Connection button in the toolbar at the top:

In the Create Data Connection form that comes up, enter the name for the data connection, optionally add a description, and select CSV Files from the Connection Type drop-down list:

If your CSV file is not yet on the Optilogic platform, you can drag and drop it onto the “Drag and drop” area of the form to upload it to the /My Files/DataStar folder. If it is already on the Optilogic platform, or after uploading it through the drag and drop option, you can select it in the list of CSV files. Once selected, it becomes greyed out in the list to indicate it is the file being used; it is also pinned at the top of the list with a darker background shade so users know without scrolling which file is selected. Note that you can filter this list by typing in the Search box to quickly find the desired file. Once the file is selected, users can optionally configure additional settings available on the right-hand side of the form; see the CSV File Data Connection help center article for details on these configuration options. Finally, clicking on the Add Connection button will create the CSV connection:

After creating the connection, the Data Connections tab on the DataStar start page will be active, and it shows the newly added CSV connection at the top of the list (note the connections list is shown in list view here; the other option is card view):

You can either go into an existing DataStar project or create a new one to set up a Macro that will import the data from the Historical Shipments CSV connection we just set up. For this example, we create a new project by clicking on the Create Project button in the toolbar at the top when on the start page of DataStar. Enter the name for the project, optionally add a description, change the appearance of the project if desired by clicking on the Edit button, and then click on the Add Project button:

After the project is created, the Projects tab will be shown on the DataStar start page. Click on the newly created project to open it in DataStar. Inside DataStar, you can either click on the Create Macro button in the toolbar at the top or the Create a Macro button in the center part of the application (the Macro Canvas) to create a new macro which will then be listed in the Macros tab in the left-hand side panel. Type the name for the macro into the textbox:

When a macro is created, it automatically gets a Start task added to it. Next, we open the Tasks tab by clicking on the tab on the left in the panel on the right-hand side of the macro canvas. Click on Import and drag it onto the macro canvas:

When hovering close to the Start task, it will be suggested to connect the new Import task to the Start task. Dropping the Import task here will create the connecting line between the 2 tasks automatically. Once the Import task is placed on the macro canvas, the Configuration tab in the right-hand side panel will be opened. Here users can enter the name for the task, select the data connection that is the source for the import (the Historical Shipments CSV connection), and the data connection that is the destination of the import (a new table named “rawshipments” in the Project Sandbox):

If not yet connected automatically in the previous step, connect the Import Raw Shipments task to the Start task by clicking on the connection point in the middle of the right edge of the Start task, holding the mouse down and dragging the connection line to the connection point in the middle of the left edge of the Import Raw Shipments task. Next, we can test the macro that has been set up so far by running it: either click on the green Run button in the toolbar at the top of DataStar or click on the Run button in the Logs tab at the bottom of the macro canvas:

You can follow the progress of the Macro run in the Logs tab and once finished examine the results on the Data Connections tab. Expand the Project Sandbox data connection to open the rawshipments table by clicking on it. A preview of the table of up to 10,000 records will be displayed in the central part of DataStar:

Exciting tools that drastically shorten the time spent wrangling data, building supply chain models for Cosmic Frog, and analyzing outputs of these models are now available on the Optilogic platform.
This documentation briefly explains how to access these AI Agents and Utilities, lists the available tools with a short description of each, and provides links to detailed documentation for several of these tools.
Before we dive into how to access the AI Agents & Utilities, here are a few links you may find helpful:
Currently, the available Agents and Utilities are accessed by using Run Utility tasks in DataStar. At a high level, the steps are as follows (screenshots follow beneath):
Your macro canvas will look similar to the following screenshot after step #4:

After adding a task, its configuration tab is automatically shown on the right-hand side. Give the task a name, and then select the Agent or Utility you want to use from the list of available Agents/Utilities in the Select Utility section. You can also use the Search box to quickly find any Agent/Utility that contains certain text in its name or description. Hover over the description of an Agent/Utility to see the full description in case it is not entirely visible:

Once an Agent/Utility has been selected by clicking on it, the Configure Utility section becomes available. The inputs here will differ based on the Agent/Utility that has been selected. In the next screenshot the Configure Utility section of the Duplicate Macro utility is shown:

Provide the inputs for at least the required parameters, and if desired for any optional ones. Note that hovering over a blue question mark icon will bring up a hover box with a description of the parameter. For example, hovering over the blue question mark of the Source Macro Name parameter brings up "Name of the macro to duplicate (case-sensitive)".
The following AI Agents and Utilities are currently available. More are being added as they become available. For each, a short description is given, and for those that have more detailed documentation to go with them, a link to this documentation is included.
Please watch this 5-minute video for an overview of DataStar, Optilogic’s new AI-powered data application designed to help supply chain teams build and update models & scenarios and power apps faster & easier than ever before!
For detailed DataStar documentation, please see Navigating DataStar on the Help Center.
This documentation covers how to create and configure DataStar CSV File data connections.
After opening DataStar, you can use the Create Data Connection button to add new data connections which can be used by all your DataStar projects:

On the right-hand side of the Create Data Connection form, configuration options for the creation of the CSV File data connection are available. These will be covered now using the following screenshots.

Note that when making any changes in these parameters the Data Preview below is immediately updated to take those changes into account. This is helpful to ensure that the parameters are set correctly.
Next, we will look at the Column Selection area, which was collapsed (the default) in the previous screenshot, but is expanded in the next one:

The drop-down list that comes up when clicking on the caret icon in a data type field is shown in this screenshot:

Just as with the Parameters section, making changes in this Column Selection section updates the Data Preview automatically.
Finally, the Data Preview section shows a preview of the data as it is currently configured based on the Parameters and Column Selection sections:

Once the configuration of the data connection is finalized, click on the blue Add Connection button, shown in the first screenshot above, to create the CSV data connection.
Learn all about DataStar from the Navigating DataStar section on Optilogic's Help Center.
DataStar is Optilogic’s new AI-powered data product designed to help supply chain teams build and update models & scenarios and power apps faster & easier than ever before. It enables users to create flexible, accessible, and repeatable workflows with zero learning curve—combining drag-and-drop simplicity, natural language AI, and deep supply chain context.
Today, up to an estimated 80% of a modeler's time is spent on data—connecting, cleaning, transforming, validating, and integrating it to build or refresh models. DataStar drastically shrinks that time, enabling teams to:
The 2 main goals of DataStar are 1) ease of use and 2) effortless collaboration; these are achieved by:
In this documentation, we will start with a high-level overview of the DataStar building blocks. Next, creating projects and data connections will be covered before diving into the details of adding tasks and chaining them together into macros, which can then be run to accomplish the data goals of your project.
Please see this "Getting Started with DataStar: Application Overview" video for a quick 5-minute overview of DataStar.
Before diving into more details in later sections, this section will describe the main building blocks of DataStar, which include Data Connections, Projects, Macros, and Tasks.
Since DataStar is all about working with data, Data Connections are an important part of DataStar. These enable users to quickly connect to and pull in data from a range of data sources. Data Connections in DataStar:
Connections to other common data resources such as MySQL, OneDrive, SAP, and Snowflake will become available as built-in connection types over time. Currently, these data sources can be connected to by using scripts that pull them in from the Optilogic side or using ETL tools or automation platforms that push data onto the Optilogic platform. Please see the "DataStar: Data Integration" article for more details on working with both local and external data sources.
Users can check the Resource Library for the currently available template scripts and utilities. These can be copied to your account or downloaded and after a few updates around credentials, etc. you will be able to start pulling data in from external sources:

Projects are the main container of work within DataStar. Typically, a Project aims to achieve a certain goal by performing all or a subset of these steps: importing specific data; cleansing, transforming & blending it; and finally publishing the results to another file/database. The scope of DataStar Projects can vary greatly; consider for example the following 2 projects:
Projects consist of one or multiple macros, which in turn consist of one or multiple tasks. Tasks are the individual actions or steps which can be chained together within a macro to accomplish a specific goal.
The next screenshot shows an example Macro called "Transportation Policies" which consists of 7 individual tasks that are chained together to create transportation policies for a Cosmic Frog model from imported Shipments and Costs data:

Every project by default contains a Data Connection named Project Sandbox. This data connection is not global to all DataStar projects; it is specific to the project it is part of. The Project Sandbox is a Postgres database where users generally import the raw data from the other data connections into, perform transformations in, save intermediate states of data in, and then publish the results out to a Cosmic Frog model (which is a data connection different than the Project Sandbox connection). It is also possible that some of the data in the Project Sandbox is the final result/deliverable of the DataStar Project or that the results are published into a different type of file or system that is set up as a data connection rather than into a Cosmic Frog model.
The next diagram shows how Data Connections, Projects, and Macros relate to each other in DataStar:

As referenced above too, to learn more about working with both local and external data, please see this "DataStar: Data Integration" article.
On the start page of DataStar, the user will be shown their existing projects and data connections. They can be opened or deleted here, and users can also create new projects and data connections from this start page.
The next screenshot shows the existing projects in card format:

New projects can be created by clicking on the Create Project button in the toolbar at the top of the DataStar application:

If on the Create Project form a user decides they want to use a Template Project rather than a new Empty Project, it works as follows:

These template projects are also available on Optilogic's Resource Library:

After the copy process completes, we can see the project appear in the Explorer and in the Project list in DataStar:

Note that any files needed for data connections in template projects copied from the Resource Library can be found under the "Sent to Me" folder in the Explorer. They will be in a subfolder named @datastartemplateprojects#optilogic (the sender of the files).
The next screenshot shows the Data Connections that have already been set up in DataStar in list view:

New data connections can be created by clicking on the Create Data Connection button in the toolbar at the top of the DataStar application:

The remainder of the Create Data Connection form will change depending on the type of connection that was chosen as different types of connections require different inputs (e.g. host, port, server, schema, etc.). In our example, the user chooses CSV Files as the connection type:

In our walk-through here, the user drags and drops a Shipments.csv file from their local computer on top of the Drag and drop area:

Now let us look at a project when it is open in DataStar. We will first get a lay of the land with a high-level overview screenshot and then go into more detail for the different parts of the DataStar user interface:

Next, we will dive a bit deeper into a macro:

The Macro Canvas for the Customers from Shipments macro is shown in the following screenshot:

In addition to the above, please note following regarding the Macro Canvas:

We will move on to covering the 2 tabs on the right-hand side pane, starting with the Tasks tab. Keep in mind that in the descriptions of the tasks below, the Project Sandbox is a Postgres database connection. The following tasks are currently available:

Users can click on a task in the tasks list and then drag and drop it onto the macro canvas to incorporate it into a macro. Once added to a macro, a task needs to be configured; this will be covered in the next section.
When adding a new task, it needs to be configured, which can be done on the Configuration tab. When a task is newly dropped onto the Macro Canvas its Configuration tab is automatically opened on the right-hand side pane. To make the configuration tab of an already existing task active, click on the task in the Macros tab on the left-hand side pane or click on the task in the Macro Canvas. The configuration options will differ by type of task, here the Configuration tab of an Import task is shown as an example:


Please note that:
The following table provides an overview of what connection type(s) can be used as the source / destination / target connection by which task(s), where PG is short for a PostgreSQL database connection and CF for a Cosmic Frog model connection:

Leapfrog in DataStar (aka D* AI) is an AI-powered feature that transforms natural language requests into executable DataStar Update and Run SQL tasks. Users can describe what they want to accomplish in plain language, and Leapfrog automatically generates the corresponding task query without requiring technical coding skills or manual inputs for task details. This capability enables both technical and non-technical users to efficiently manipulate data, build Cosmic Frog models, and extract insights through conversational interactions with Leapfrog within DataStar.
Note that there are 2 appendices at the end of this documentation where 1) details around Leapfrog in DataStar's current features & limitations are covered and 2) Leapfrog's data usage and security policies are summarized.


Leapfrog’s response to this prompt is as follows:

DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT
destination_store AS customer,
AVG(destination_latitude) AS latitude,
AVG(destination_longitude) AS longitude
FROM rawshipments
GROUP BY destination_store
To help users write prompts, the tables present in the Project Sandbox and their columns can be accessed from the prompt writing box by typing an @:


This user used the @ functionality repeatedly to write their prompt as follows, which helped to generate their required Run SQL task:

Now, we will also have a look at the Conversations tab while showing the 2 tabs in Split view:

Within a Leapfrog conversation, Leapfrog remembers the prompts and responses thus far. Users can therefore build upon previous questions, for example by following up with a prompt along the lines of “Like that, but instead of using a cutoff date of August 10, 2025, use September 24, 2025”.
Additional helpful DataStar Leapfrog links:
Users can run a Macro by selecting it and then clicking on the green Run button at the right top of the DataStar application:

Please note that:

Next, we will cover the Logs tab at the bottom of the Macro Canvas where logs of macros that are running/have been run can be found:

When a macro has not yet been run, the Logs tab will contain a message with a Run button, which can also be used to kick off a macro run. When a macro is running or has been run, the log will look similar to the following:

The next screenshot shows the log of a run of the same macro where the third task ended in an error:

The progress of DataStar macro and task runs can also be monitored in the Run Manager application where runs can be cancelled if needed too:

Please note that:
In the Data Connections tab on the left-hand side pane the available data connections are listed:

Next, we will have a look at what the connections list looks like when the connections have been expanded:

The tables within a connection can be opened within DataStar. They are then displayed in the central part of DataStar, where the Macro Canvas is shown when a macro is the active tab.
Please note:

A table can be filtered based on values in one or multiple columns:


Columns can be re-ordered and hidden/shown as described in the Appendix; this can be done using the Columns fold-out pane too:

Finally, filters can also be configured from a fold-out pane:

Users can explore the complete dataset of connections with tables larger than 10k records in other applications on the Optilogic platform, depending on the type of connection:
Here is how to find the database and table(s) of interest in SQL Editor:

Here are a few additional links that may be helpful:
We hope you are as excited about starting to work with DataStar as we are! Please stay tuned for regular updates to both DataStar and all the accompanying documentation. As always, for any questions or feedback, feel free to contact our support team at support@optilogic.com.
The grids used in DataStar can be customized; we will cover the available options using the screenshot below. This screenshot shows the list of CSV files in a user's Optilogic account when creating a new CSV File connection. The same grid options are available on the grid in the Logs tab and when viewing tables that are part of any Data Connections in the central part of DataStar.

Leapfrog's brainpower comes from:
All training processes are owned and managed by Optilogic — no outside data is used.
When you ask Leapfrog a question:
Your conversations (prompts, answers, feedback) are stored securely at the user level.
We are excited to announce the upcoming release of Anura schema version 2.8.19, with enhancements to our engines and model run options. The migration rollout will start soon and is expected to be completed by the end of March.
All updates in this release are additive — there are no breaking changes.
Neo
Hopper
Other
Model Run Options
MaxNumberOfSourcesToConsiderForMultiStopRoutes
DendroTimeout
CYCLO - Multi-Echelon Inventory Optimization
CompartmentConfigurations
InventorySettings
UtilityCurves
InventoryNetworkSummary
InventorySafetyStockSummary
InventoryValidationErrorReport
TransportationAssets.CompartmentConfigurationName
TransportationPolicies.MaximumServiceTime
TransportationPolicies.MaximumServiceTimeUOM
TransportationPolicies.MinimumServiceTime
TransportationPolicies.MinimumServiceTimeUOM
OptimizationProcessSummary.DisposalBehavior
TransportationShipmentSummary.CompartmentName
If you have any questions or concerns about this upcoming Anura upgrade, please reach out to Optilogic Support at support@optilogic.com.
When a scenario run is infeasible, it means that the solver cannot find any solutions that satisfy all the specified constraints simultaneously. In other words, the defined constraints are too restrictive or they contradict each other, making it impossible to satisfy all of them at once. Examples are:
If a Neo (network optimization) scenario run is infeasible, a user can tell from within Cosmic Frog and also by looking at the run's logs in the Run Manager application. Within Cosmic Frog, the Optimization Network Summary output table will show Solve Status = Infeasible and many other output columns, such as Solve Type and Solve Time, will be blank:

In the Run Manager application, also on the Optilogic Platform, users can select the scenario run, which will have Status = Error, and look at the run's logs on the right-hand side:

Running the check infeasibility tool is a great first step in identifying why a scenario is infeasible. It can allow the scenario to optimize even if the current constraints cause infeasibility. Generally, the tool will loosen constraints in the scenario, which allows it to solve. This means that the check infeasibility tool is solving an optimization problem focused on feasibility, not cost. The goal of this augmented model is to minimize how much the constraints are loosened, and therefore the tool can often find:
The Check Infeasibility tool is accessed via the model run options on the Run Settings screen:

When running the Check Infeasibility tool in this default manner, all constraints will be relaxed. Additional run parameters can be used to selectively choose which constraints can be relaxed during a Check Infeasibility run and which need to be enforced, see the "Infeasibility Parameters" section in the "Running Models & Scenarios" help center article. Especially in models where many different types of constraints are applied, this can help in determining which constraint(s) cause the infeasibility and how constraints interact with each other. An approach can be to run the Check Infeasibility tool a few times with different constraints enforced and then compare the results. It also lets users indicate to the solver which constraints are most important to satisfy by keeping those enforced, while others are allowed to be relaxed.

Once a Check Infeasibility run completes successfully, the results that are likely most helpful for identifying the infeasibility are found in the Optimization Constraint Summary output table. Users may need to filter for the scenario name in case this table contains results of multiple check infeasibility scenario runs. The following 2 screenshots show the 1 record in this table for the "Flow Constraint El Bajio 2M" scenario that returns infeasible as shown above. This first screenshot shows that the constraint in question is a Flow constraint that is applied on the lanes from the El Bajio Factory to a group of DCs named DCs Con:

Scrolling right in the table, we can see additional fields which indicate why this constraint leads to infeasibility in the scenario:

In this example, the check infeasibility tool can pinpoint the infeasibility exactly. At other times, the outcomes will be more general, see for example the following screenshots of results of a check infeasibility run on a different scenario:


Here the constraint that cannot be fulfilled is the demand for 1,000 units of Product_2 in Period_6. The output in the Constraint Summary output table is telling us that a way to make this feasible is to change the demand quantity to be 0. We will need to investigate the model inputs further to understand why this is. On further inspection, the infeasibility cause is found to be that the maturation time of Product_2 is set to 50 days. This is longer than the model horizon of 42 days (6 weekly periods). So, if no or not enough initial inventory is present in the model, Product_2 cannot be made and mature in time to fulfill demand in Period_6.
If the user has turned on the "Print Full Solution During Infeasibility Diagnosis" option, see above, the outputs in the other output tables can possibly also help to determine the cause of the infeasibility.
When a Cosmic Frog model has been built and scenarios of interest have been created, it is usually time to run 1 or multiple scenarios. This documentation covers how scenarios can be kicked off, and which run parameters can be configured by users.
When ready to run scenarios, users can click on the green Run button at the right top of the Cosmic Frog screen:

This will bring up the Run Settings screen:

Please note that:

While we will discuss all parameters for each technology in the next sections of the documentation, please note that you can also find short explanations for each of them within Cosmic Frog: when hovering over a parameter, a tooltip explaining the parameter will be shown. In the next screenshot, the user has expanded the Termination section within the Optimization (Neo) technology parameters section and is hovering with the mouse over the "MIP Relative Gap Percentage" parameter, which brings up the tooltip explaining this parameter:

When selecting one or multiple scenarios to be run with the same engine, the corresponding technology parameters are automatically expanded in the Technology Parameters part of the Run Settings screen:


When enabled, the Check Infeasibility tool will run an infeasibility diagnostic on the model in order to identify any cause(s) of the scenario being infeasible rather than optimizing the scenario for minimal cost. For more information on using the check infeasibility tool, please see this Help Center article.



For more details on the Cost To Serve output tables that are populated by turning the options discussed below on, please see this "Cost to Serve Outputs (Optimization)" Help Center article.


The following screenshot shows where the NEO.lp file can be found when running a scenario with the Write LP File parameter (bullet 2 under the above screenshot) turned on:


As more types of constraints are added to the Neo engine on an ongoing basis, not all may be captured by one of the above "Relax … Constraints" buckets. Any such constraints, like for example shelf life, are always relaxed when running the Infeasibility Check and, if they are the cause of infeasibility, reported in the Optimization Constraint Summary output table.

The following screenshot shows part of a Job Log of a simulation run, see also bullet 3 above:


The inventory engine in Cosmic Frog (called Dendro) uses a genetic algorithm to evaluate possible solutions in a successive manner. The parameters that can be set dictate how deep the search space is and how solutions can evolve from one generation to the next. Using the parameter defaults will suffice for most inventory problems; they are, however, available for experienced users to change as needed.


As always, please feel free to contact Optilogic Support at support@optilogic.com in case of questions or feedback.
Since Leapfrog's creation, the system has continuously evolved with the addition of new specialized agents. The platform now features a comprehensive agent library built on a robust toolkit framework, enabling sophisticated multi-agent workflows and autonomous task execution.
AI agents are software systems that use a large language model (LLM) as a reasoning engine but go beyond chat by taking actions in an environment. Instead of only generating text, an agent can interpret a goal, decide what to do next, call external capabilities (tools), observe the results, and iterate until the objective is achieved.
In practice, an "agent" is not a single model call - it is a control system wrapped around an LLM:
This architecture matters because it turns the LLM from a passive text generator into an adaptive problem-solver that can:
An agent is not just a chat model. A chat model produces responses; an agent operates - it can run commands, fetch data, write artifacts, and iterate autonomously within defined constraints. Think of an AI agent as a smart assistant that can:
Agents are most useful when tasks are multi-step, partially specified, and feedback-driven, for example:
If a task is single-shot and fully specified (e.g., "summarize this paragraph"), a non-agent LLM call is often simpler and cheaper.
Most agents follow a ReAct-style loop (Reason + Act), sometimes with explicit planning:
A useful way to think about the loop is that each iteration should:
Well-behaved agents stop for explicit reasons, such as:

The Leapfrog ecosystem includes many specialized agents (some of which are shown in the image above), each designed for specific analytical and reporting tasks.
Why specialization helps:
A common pattern is an orchestrator (or "manager") agent that routes work to sub-agents and integrates their outputs into a final deliverable.
The agent toolkit is built on four foundational concepts that enable flexible and powerful agent development:
The core reasoning component - a large language model equipped with specialized skills and capabilities.
In addition to the model itself, an agent definition typically includes:
A versatile building block that packages how to do something. This modularity allows agents to be composed and extended dynamically.
A skill may:
A mechanism for injecting domain-specific expertise into agents at runtime, enabling them to operate effectively in specialized fields without requiring model retraining.
An intelligent storage system that helps agents overcome context-management challenges by preserving important information for future use, enabling continuity across interactions.
Current implementation supports several advanced capabilities enabled by the agent toolkit:
Agents can build structured plans that improve the accuracy and quality of final outputs through systematic decomposition of complex tasks.
The system supports custom tools provided by users, allowing agents to integrate with existing workflows and data infrastructure.
Persistent memory enables agents to maintain context and track important information across extended work sessions.
Complex tasks can be delegated to specialized sub-agents, allowing for efficient division of labor and expertise application.
The system intelligently manages context to ensure agents have access to relevant information while avoiding context window limitations.
Below is a simple workflow showing how different components work together. For simplicity, not all components are included here.

An agent is the intelligent layer that decides what to do. It's like a project manager who understands the goal, plans the approach, and uses available skills and tools to get the job done.
Skills are packaged capabilities that combine one or more tools with guidance on when and how to use them. Think of a skill as a trained procedure or technique.
Tools are the specific actions an AI agent can perform. They are specialized and do one specific thing reliably. They don't make decisions - they just execute when called.
As an AI Agent works, it produces logs that include the steps the agent takes and the tools it calls, as well as a work summary. The AI Response sections are typically the most useful as they explain the exploration plan, the work that was done, and the results after exploration; these are generally responses to the user, while the other sections are more for internal processes.


The Model Output Insights Agent helps users investigate and analyze Cosmic Frog model outputs by turning analytical questions into structured, data-backed strategic reports. It breaks down complex questions into a step-by-step exploration plan, executes targeted queries, synthesizes findings, and produces a professional report - complete with visualizations and actionable recommendations.

This documentation describes how this specific agent works and can be configured, including walking through an example. Please see the “AI Agents: Architecture and Components” Help Center article if you are interested in understanding how the Optilogic AI Agents work at a detailed level.
Extracting meaningful insights from large databases typically requires exploring and analyzing many output tables which can take a lot of time and effort. The Model Output Insights Agent streamlines the process, helping users get to the insights quicker than ever before.
Main skills the Model Output Insights Agent uses:

Supporting capabilities:

The agent can be accessed through the Run Utility task in DataStar. The key inputs are:

Optionally, users can configure the following Run Utility task inputs:

After the run, a report in markdown format (.md) and possibly charts are created and can be found in the Explorer under the specified file name and folder. When clicked, the file is opened in the Lightning Editor application for review.
Note that currently the charts are only included in the markdown file as file names. Users can look for the charts themselves in the Charts folder in the targeted report directory:

The Run Utility task also offers the ability for users to set Run Configuration options. This is optional.



This example uses the Global Supply Chain Strategy model from the Resource Library to get insights on a Baseline vs. No Detroit DC scenario comparison, exploring cost, flow shifts, and service impacts.


Cosmic Frog Model Name: Global Supply Chain Strategy
Analysis Question: Compare cost and flow from Baseline and No Detroit DC scenarios. I'm interested in knowing the cost bucket that drives total savings. I want to know where the flow from Detroit DC was redirected to. Lastly, compare weighted average service distance - i.e. do customers have shorter/longer/the same service distance when Detroit closes down. Who are the top 5 customers with highest service impact?
Knowledge: Info on target audience for the report, expected report length and tone:

Should you wish to read the entire report instructions file and/or use it as a starting point for your own usage with this Agent, you can download it here. After downloading, please rename the .txt extension to .md. You can then upload it to your Optilogic account using the Explorer application and then view it in the Lightning Editor application.
Outputs: The report as a markdown file and a chart in the Charts folder:

DataStar users typically will want to use data from a variety of sources in their projects. This data can be in different locations and systems and there are multiple methods available to get the required data into the DataStar application. In this documentation we will describe the main categories of data sources users may want to use and the possible ways of making these available in DataStar for usage.
If you would first like to learn more about DataStar before diving into data integration specifics, please see the Navigation DataStar articles on the Optilogic Help Center.
The following diagram shows different data sources and the data transfer pathways to make them available for use in DataStar:

We will dive a bit deeper into making local data available for use in DataStar building upon what was covered under bullets 5a-5c in the previous screenshot. First, we will familiarize ourselves with the layout of the Optilogic platform:

Next, we will cover the 3 steps to go from data sitting on a user’s computer locally to being able to use it in DataStar in detail through the next set of screenshots. At a high-level the steps are:
To get local data onto the Optilogic platform, we can use the file / folder upload option:


Select either the file(s) or folder you want to upload by browsing to it/them. After clicking on Open, the File Upload form will be shown again:

Note that files in the upload list that will not cause name conflicts can also be renamed or removed from the list if so desired. This can for example be convenient when wanting to upload most files in a folder, except for a select few. In that case, use the Add Folder option and remove the few files that should not be uploaded from the list that will be shown, rather than using Add Files and then manually selecting almost all files in the folder.
Once the files are uploaded, you will be able to see them in the Explorer by expanding the folder they were uploaded to or searching for (part of) their name using the Search box.
The second step is to then make these files visible to DataStar by setting up Data Connections to them:

After setting up a Data Connection to a Cosmic Frog model and to a CSV file, we can see the source files in the Explorer, and the Data Connections pointing to these in DataStar side-by-side:

To start using the data in DataStar, we need to take the third step of importing the data from the data connections into a project. Typically, the data will be imported into the Project Sandbox, but this could also be into another Postgres database, including a Cosmic Frog model. Importing data is done using Import tasks; the Configuration tab of one is shown in this next screenshot:

The 3 steps described above are summarized in the following sequence of screenshots:

For a data workflow that is used repeatedly and needs to be re-run using the latest data regularly, users do not need to go through all 3 steps above of uploading data, creating/re-configuring data connections, and creating/re-configuring Import tasks to refresh local data. If the new files to be used have the same name and same data structure as the current ones, replacing the files on the Optilogic platform with the newer ones will suffice (so only step 1 is needed); the data connections and Import tasks do not need to be updated or re-configured. Users can do this manually or programmatically:
This section describes how to bring external data into DataStar using supported integration patterns where the data transfer is started from an external system, i.e. the data is “pushed” onto the Optilogic platform.
External systems such as ETL tools, automation platforms, or custom scripts can load data into DataStar through the Optilogic Pioneer API (please see the Optilogic REST API documentation for details). This approach is ideal when you want to programmatically upload files, refresh datasets, or orchestrate transformations without connecting directly to the underlying database.
Key points:
Please note that Optilogic has developed a Python library to facilitate scripting for DataStar. If your external system is Python based, you can leverage this library as a wrapper for the API. For more details on working with the library and a code example of accessing a DataStar project’s sandbox, see this “Using the DataStar Python Library” help center article.
Every DataStar project is backed by a PostgreSQL database. You can connect directly to this database using any PostgreSQL-compatible driver, including:
This enables you to write or update data using SQL, query the sandbox tables, or automate recurring loads. The same approach applies to both DataStar projects and Cosmic Frog models since both use PostgreSQL under the hood. Please see this help center article on how to retrieve connection strings for Cosmic Frog model and DataStar project databases; these will need to be passed into the database connection to gain access to the model / project database.
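As an illustration only (the table and column names below are hypothetical, and the connection itself is established by passing the retrieved connection string to your PostgreSQL client of choice), a recurring load over such a direct connection could be scripted with standard SQL along these lines:
-- hypothetical sketch: refresh a staging table in the Project Sandbox over a direct connection
DROP TABLE IF EXISTS shipments_staging;
CREATE TABLE shipments_staging (origin_dc TEXT, destination_store TEXT, units DOUBLE PRECISION, ship_date TEXT);
INSERT INTO shipments_staging (origin_dc, destination_store, units, ship_date)
VALUES ('Dallas DC', 'CZ103', 120, '01/07/2024');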
Several scripts and utilities to connect to common external data sources, including Databricks, Google Big Query, Google Drive, and Snowflake, are available on Optilogic’s Resource Library:

These utilities and scripts can function as a starting point to modify into your own desired script for connecting to and retrieving data from a certain data source. You will need to update authentication and connection information in the scripts and configure the user settings to your needs. For example, this is the User Input section of the “Databricks Data Import Script”:

The user needs to update following lines; others can be left at defaults and only updated if desired/required:
For any questions or feedback, please feel free to reach out to the Optilogic support team on support@optilogic.com.
In this quick start guide we will show how Leapfrog AI can be used in DataStar to generate tasks from natural language prompts, no coding necessary!
This quick start guide builds upon the previous one where a CSV file was imported into the Project Sandbox, please follow the steps in there first if you want to follow along with the steps in this quick start. The starting point for this quick start is therefore a project named Import Historical Shipments that has a Historical Shipments data connection of type = CSV, and a table in the Project Sandbox named rawshipments, which contains 42,656 records.
The Shipments.csv file that was imported into the rawshipments table has following data structure (showing 5 of the 42,656 records):

Our goal in this quick start is to create a task using Leapfrog that will use this data (from the rawshipments table in the Project Sandbox) to create a list of unique customers, where the destination stores function as the customers. Ultimately, this list of customers will be used to populate the Customers input table of a Cosmic Frog model. A few things to consider when formulating the prompt are:
Within the Import Historical Shipments DataStar project, click on the Import Shipments macro to open it in the macro canvas, you should see the Start and Import Raw Shipments tasks on the canvas. Then open Leapfrog by clicking on the Ask Leapfrog AI button to the right in the toolbar at the top of DataStar. This will open the Leapfrog tab where a welcome message will be shown. Next, we can write our prompt in the “Write a message…” textbox.

Keeping in mind the 5 items mentioned above, the prompt we use is the following: “Use the @rawshipments table to create unique customers (use the @rawshipments.destination_store column); average the latitudes and longitudes. Only use records with the @rawshipments.ship_date between July 1 2024 and June 30 2025. Match to the anura schema of the Customers table”. Please note that:
After clicking on the send icon to submit the prompt, Leapfrog will take a few seconds to consider the prompt and formulate a response. The response will look similar to the following screenshot, where we see from top to bottom:

For copy-pasting purposes, the resulting SQL Script is repeated here:
DROP TABLE IF EXISTS customers;
CREATE TABLE customers AS
SELECT
destination_store AS customername,
AVG(destination_latitude) AS latitude,
AVG(destination_longitude) AS longitude
FROM rawshipments
WHERE
TO_DATE(ship_date, 'DD/MM/YYYY') >= '2024-07-01'::DATE
AND TO_DATE(ship_date, 'DD/MM/YYYY') <= '2025-06-30'::DATE
GROUP BY destination_store;
Those who are familiar with SQL will be able to tell that this will indeed achieve our goal. Since that is the case, we can click on the Add to Macro button at the bottom of Leapfrog’s response to add this as a Run SQL task to our Import Shipments macro. When hovering over this button, you will see that Leapfrog suggests where to put the task on the macro canvas and to connect it to the Import Raw Shipments task, which is what we want. Clicking on the Add to Macro button then adds it.

We can test our macro so far, by clicking on the green Run button at the right top of DataStar. Please note that:
Once the macro is done running, we can check the results. Go to the Data Connections tab, expand the Project Sandbox connection and click on the customers table to open it in the central part of DataStar:

We see that the customers table resulting from running the Leapfrog-created Run SQL task contains 1,333 records. Also notice that its schema matches that of the Customers table of Cosmic Frog models, which includes columns named customername, latitude, and longitude.
Writing prompts for Leapfrog that will create successful responses (i.e. the generated SQL Script achieves what the prompt writer intended) may take a bit of practice. This Mastering Leapfrog for SQL Use Cases: How to write Prompts that get Results post on the Frogger Pond community portal has some great advice which applies to Leapfrog in DataStar too. It is highly recommended to give it a read; the main points of advice follow here too:
As an example, let us look at variations of the prompt we used in this quick start guide, to gauge the level of granularity needed for a successful response. In this table, the prompts are listed from least to most granular:
Note that in the above prompts, we are quite precise about table and column names and no typos are made by the prompt writer. However, Leapfrog can generally manage typos well and often also picks up table and column names when they are not explicitly used in the prompt. So while being more explicit generally results in higher accuracy, it is not necessary to always be extremely explicit; we simply recommend being as explicit as you can be.
In addition, these example prompts do not use the @ character to specify tables and columns to use, but they could to facilitate prompt writing further.
In this quick start guide we will walk through the steps of exporting data from a table in the Project Sandbox to a table in a Cosmic Frog model.
This quick start guide builds upon a previous one where unique customers were created from historical shipments using a Leapfrog-generated Run SQL task. Please follow the steps in that quick start guide first if you want to follow along with the steps in this one. The starting point for this quick start is therefore a project named Import Historical Shipments, which contains a macro called Import Shipments. This macro has an Import task and a Run SQL task. The project has a Historical Shipments data connection of type = CSV, and the Project Sandbox contains 2 tables named rawshipments (42,656 records) and customers (1,333 records).
The steps we will walk through in this quick start guide are:
First, we will create a new Cosmic Frog model which does not have any data in it. We want to use this model to receive the data we export from the Project Sandbox.
As shown with the numbered steps in the screenshot below: while on the start page of Cosmic Frog, click on the Create Model button at the top of the screen. In the Create Frog Model form that comes up, type the model name, optionally add a description, and select the Empty Model option. Click on the Create Model button to complete the creation of the new model:

Next, we want to create a connection to the just created empty Cosmic Frog model in DataStar. To do so: open your DataStar application, then click on the Create Data Connection button at the top of the screen. In the Create Data Connection form that comes up, type the name of the connection (we are using the same name as the model, i.e. “Empty CF Model for DataStar Export”), optionally add a description, select Cosmic Frog Models in the Connection Type drop-down list, click on the name of the newly created empty model in the list of models, and click on Add Connection. The new data connection will now be shown in the list of connections on the Data Connections tab (shown in list format here):

Now, go to the Projects tab, and click on the “Import Historical Shipments” project to open it. We will first have a look at the Project Sandbox and the empty Cosmic Frog model connections, so click on the Data Connections tab:

The next step is to add and configure an Export Task to the Import Shipments macro. Click on the Macros tab in the panel on the left-hand side, and then on the Import Shipments macro to open it. Click on the Export task in the Tasks panel on the right-hand side and drag it onto the Macro Canvas. If you drag it close to the Run SQL task, it will automatically connect to it once you drop the Export task:

The Configuration panel on the right has now become the active panel:

Click on the AutoMap button, and in the message that comes up, select either Replace Mappings or Add New Mappings. Since we have not mapped anything yet, the result will be the same in this case. After using the AutoMap option, the mapping looks as follows:

We see that each source column is now mapped to a destination column of the same name. This is what we expect, since in the previous quick start guide, we made sure to tell Leapfrog when generating the Run SQL task for creating unique customers to match the schema of the customers table in Cosmic Frog models (“the Anura schema”).
If the Import Shipments macro has been run previously, we can just run the new Export Customers task by itself (hover over the task in the Macro Canvas and click on the play button that comes up), otherwise we can choose to run the full macro by clicking on the green Run button at the right top. Once completed, click on the Data Connections tab to check the results:

Above, the AutoMap functionality was used to map all 3 source columns to the correct destination columns. Here, we will go into some more detail on manually mapping and additional options users have to quickly sort and filter the list of mappings.

In this quick start guide we will walk through the steps of modifying data in a table in the Project Sandbox using Update tasks. These changes can either be made to all records in a table or a subset based on a filtering condition. Any PostgreSQL function can be used when configuring the update statements and conditions of Update tasks.
This quick start guide builds upon a previous one where unique customers were created from historical shipments using a Leapfrog-generated Run SQL task. Please follow the steps in that quick start guide first if you want to follow along with the steps in this one. The starting point for this quick start is therefore a project named “Import Historical Shipments”, which contains a macro called Import Shipments. This macro has an Import task and a Run SQL task. The project has a Historical Shipments data connection of type = CSV, and the Project Sandbox contains 2 tables named rawshipments (42,656 records) and customers (1,333 records). Note that if you also followed one of the other quick start guides on exporting data to a Cosmic Frog model (see here), your project will also contain an Export task, and a Cosmic Frog data connection; you can still follow along with this quick start guide too.
The steps we will walk through in this quick start guide are:
We have a look at the customers table which was created from the historical shipment data in the previous 2 quick start guides; see the screenshot below. Sorting on the customername column, we see that the values are ordered alphabetically. Because the customername column is of type text (the values start with the string “CZ”), they are not ordered by the number part that follows the “CZ” prefix.

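To see why, note that sorting a text column compares values character by character. A quick query (the customer names in the comment are illustrative examples) makes the effect visible:
SELECT customername FROM customers ORDER BY customername;
-- returns, for example, CZ1, CZ10, CZ100, ... before CZ2, because values are compared character by character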
If we want sorting customer names alphabetically to result in the same order as sorting by the number part of the customer name, we need to make sure each customer name has the same number of digits. We will use Update tasks to change the format of the number part of the customer names so that they all have 4 digits, by adding leading 0’s to those that have fewer than 4 digits. While we are at it, we will also replace the “CZ” prefix with “Cust_” to make the data consistent with other data sources that contain customer names. We will initially break the updates to the customer name column up into 3 steps using 3 Update tasks. At the end, we will see how they can be combined into a single Update task. The 3 steps are:
Let us add the first Update task to our Import Shipments macro:

After dropping the Update task onto the macro canvas, its configuration tab will be opened automatically on the right-hand side:

If you have not already, click on the plus button to add your first update statement:

Next, we will write the expression, for which we can use the Expression Builder area just below the update statements table. What we type there will also be added to the Expression column of the selected Update Statement. These expressions can use any PostgreSQL function, including those that are not pre-populated in the helper lists. Please see the PostgreSQL documentation for all available functions.

When clicking in the Expression Builder, an equal sign is already there, and a list of items comes up. At the top are the columns that are present in the target table and below those is a list of string functions which we can select to use. The functions shown here are string functions since we are working on a text-type column; when working on a column with a different data type, the functions relevant to that data type will be shown instead. We will select the last option shown in the screenshot, the substring function, since we first want to remove the “CZ” from the start of the customer names:

The substring function needs at least 2 arguments, which are specified within the parentheses. The first argument needs to be the customername column in our case, since that is the column containing the string values we want to manipulate. After typing a “c”, the customername column and 2 functions starting with “c” are suggested in the pop-up list. We choose the customername column. The second argument specifies the start location from which we want to start the substring. Since we want to remove the “CZ”, we specify 3 as the start location, leaving characters 1 and 2 off. The third, optional argument indicates how many characters to include (the length of the substring). We do not specify it, meaning we keep all characters starting from character number 3:

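Putting those arguments together, the expression described above reads:
= substring(customername, 3)
which returns everything from the third character onward (for example, substring('CZ103', 3) returns '103').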
We can run this task now without specifying a Condition (see section further below) in which case the expression will be applied to all records in the customers table. After running the task, we open the customers table to see the result:

We see that our intended change was made. The “CZ” is removed from the customer names. Sorted alphabetically, they still are not in increasing order of the number part of their name. Next, we use the lpad (left pad) function to add leading zeroes so all customer names consist of 4 digits. This function has 3 arguments: the string to apply the left padding to (the customername column), the number of characters the final string needs to have (4), and the padding character (‘0’).

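With those three arguments the expression reads:
= lpad(customername, 4, '0')
For example, lpad('103', 4, '0') returns '0103'.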
After running this task, the customername column values are as follows:

Now with the leading zeroes and all customer names being 4 characters long, sorting alphabetically results in the same order as sorting by the number part of the customer name.
Finally, we want to add the prefix “Cust_”. We use the concat (concatenation) function for this. At first, we type Cust_ with double quotes around it, but the squiggly red line below the expression in the expression builder indicates this is not the right syntax. Hovering over the expression in the expression builder explains the problem:

The correct syntax for using strings in these functions is to use single quotes:

Instead of concat we can also use “= ‘Cust_’ || customername” as the expression. The double pipe symbol is used in PostgreSQL as the concatenation operator.
Running this third update task results in the following customer names in the customers table:

Our goal of how we wanted to update the customername column has been achieved. Our macro now looks as follows with the 3 Update tasks added:

The 3 tasks described above can be combined into 1 Update task by nesting the expressions as follows:

Running this task instead of the 3 above will result in the same changes to the customername column in the customers table.
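In plain SQL terms, and assuming the exact nesting of the three functions discussed above, the combined Update task is roughly equivalent to the following single statement:
UPDATE customers
SET customername = concat('Cust_', lpad(substring(customername, 3), 4, '0'));
Nesting works from the inside out: substring first strips the “CZ” prefix, lpad then pads the remaining number to 4 digits, and concat finally adds the “Cust_” prefix.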
Please note that in the above we only specified one update statement in each Update task. You can add more than one update statement per update task, in which case:
As mentioned above, the list of suggested functions is different depending on the data type of the column being updated. This screenshot shows part of the suggested functions for a number column:

At the bottom of the Expression Builder are multiple helper tabs to facilitate quickly building your desired expressions. The first one is the Function Helper, which lists the available functions. The functions are listed by category: string, numeric, date, aggregate, and conditional. At the top of the list, the user has search, filter, and sort options available to quickly find a function of interest. Hovering over a function in the list will bring up details of the function, from top to bottom: a summary of the format and the input and output data types of the function, a description of what the function does, its input parameter(s), what it returns, and an example:

The next helper tab contains the Field Helper. This lists all the columns of the target table, sorted by their data type. Again, to quickly find the desired field, users can search, filter, and sort the list using the options at the top of the list:

The fourth tab is the Operator Helper, which lists several helpful numerical and string operators. This list can be searched too using the Search box at the top of the list:

There is another optional configuration section for Update tasks, the Condition section. In here, users can specify an expression to filter the target table on before applying the update(s) specified in the Update Statements section. This way, the updates are only applied to the subset of records that match the condition.
In this example, we will look at some records of the rawshipments table in the project sandbox of the same project (“Import Historical Shipments”). We have opened this table in a grid and filtered for origin_dc Salt Lake City DC and destination_store CZ103.

What we want to do is update the “units” column and increase the values by 50% for the Table product. The Update Statements section shows that we set the units field to its current value multiplied by 1.5, which will achieve the 50% increase:

However, if we run the Update task as is, all values in the units field will be increased by 50%, for both the Table and the Chair product. To make sure we only apply this increase to the Table product, we configure the Condition section as follows:

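In plain SQL, this Update task with its condition corresponds roughly to the following statement (the product column name is an assumption here, as it is not spelled out above):
UPDATE rawshipments
SET units = units * 1.5
WHERE product = 'Table';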
The condition builder has the same function, field, and operator helper tabs at the bottom as the expression builder in the update statements section to enable users to quickly build their conditions. Building conditions works in the same way as building expressions.
Running the task and checking the updated rawshipments table for the same subset of records as we saw above, we can check that it worked as intended. The values in the units column for the Table records are indeed 1.5 times their original value, while the Chair units are unchanged.

It is important to note that opening tables in DataStar currently shows a preview of 10,000 records. When filtering a table by clicking on the filter icons to the right of a column name, only the resulting subset of records from those first 10,000 records will be included. While an Update task will be applied to all records in a table, due to this limit on the number of records in the preview you may not always be able to see (all) results of your Update task in the grid. In addition, an Update task can also change the order of the records in the table. This can lead to a filter showing a different set of records after running an update task as compared to the filtered subset that was shown prior to running it. Users can use the SQL Editor application on the Optilogic platform to see the full set of records for any tables.
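For example, running a query like the following in the SQL Editor returns every matching record rather than a filtered preview of the first 10,000 rows:
SELECT *
FROM rawshipments
WHERE origin_dc = 'Salt Lake City DC' AND destination_store = 'CZ103';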
Finally, if you want to apply multiple conditions you can use logical AND and OR statements to combine them in the Expression Builder. You would for example specify the condition as follows if you want to increase the units for the Table product by 50% only for the records where the origin_dc value is either “Dallas DC” or “Detroit DC”:

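Written out as a plain SQL condition (again assuming the product column name), that combined filter looks roughly as follows:
product = 'Table' AND (origin_dc = 'Dallas DC' OR origin_dc = 'Detroit DC')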
In this quick start guide we will show how users can seamlessly go from using the Resource Library, Cosmic Frog, and DataStar applications on the Optilogic platform to creating visualizations in Power BI. The example covers a cost to serve analysis using a global sourcing model. We will run 2 scenarios in this Cosmic Frog model with the goal of visualizing the total cost difference between the scenarios by customer on a map. We do this by coloring the customers based on the cost difference.
The steps we will walk through are:
We will first copy the model named “Global Sourcing – Cost to Serve” from the Resource Library to our Optilogic account (learn more about the Resource Library in this help center article):

On the Optilogic platform, go to the Resource Library application by clicking on its icon in the list of applications on the left-hand side; note that you may need to scroll down. Should you not see the Resource Library icon here, click on the icon with 3 horizontal dots, which will show all applications, including those that were previously hidden.
Now that the model is in the user’s account, it can be opened in the Cosmic Frog application:


We will only have a brief look at some high-level outputs in Cosmic Frog in this quick start guide, but feel free to explore additional outputs. You can learn more about Cosmic Frog through these help center articles. Let us have a quick look at the Optimization Network Summary output table and the map:


Our next step is to import the needed input table and output table of the Global Sourcing – Cost to Serve model into DataStar. Open the DataStar application on the Optilogic platform by clicking on its icon in the applications list on the left-hand side. In DataStar, we first create a new project named “Cost to Serve Analysis” and set up a data connection to the Global Sourcing – Cost to Serve model, which we will call “Global Sourcing C2S CF Model”. See the Creating Projects & Data Connections section in the Getting Started with DataStar help center article on how to create projects and data connections. Then, we want to create a macro which will calculate the increase/decrease in total cost by customer between the 2 scenarios. We build this macro as follows:

The configuration of the first import task, C2S Path Summary, is shown in this screenshot:

The configuration of the other import task, Customers, uses the same Source Data Connection, but instead of the optimizationcosttoservepathsummary table, we choose the customers table as the table to import. Again, the Project Sandbox is the Destination Data Connection, and the new table is simply called customers.
Instead of writing the SQL ourselves to pivot the data in the cost to serve path summary table into a new table with one row per customer containing the customer name and the total cost for each scenario, we can use Leapfrog to do it for us. See the Leapfrog section in the Getting Started with DataStar help center article and this quick start guide on using natural language to create DataStar tasks to learn more about using Leapfrog in DataStar effectively. For the Pivot Total Cost by Scenario by Customer task, the 2 Leapfrog prompts that were used to create the task are shown in the following screenshot:

The SQL Script reads:
DROP TABLE IF EXISTS total_cost_by_customer_combined;
CREATE TABLE total_cost_by_customer_combined AS
SELECT
pathdestination AS customer,
SUM(CASE WHEN scenarioname = 'Baseline' THEN pathcost ELSE 0 END)
AS total_cost_baseline,
SUM(CASE WHEN scenarioname = 'OpenPotentialFacilities' THEN pathcost ELSE 0 END)
AS total_cost_openpotentialfacilities
FROM c2s_path_summary
WHERE scenarioname IN ('Baseline', 'OpenPotentialFacilities')
GROUP BY pathdestination
ORDER BY pathdestination;
To create the Calculate Cost Savings by Customer task, we gave Leapfrog the following prompt: “Use the total cost by customer table and add a column to calculate cost savings as the baseline cost minus the openpotentalfacilities cost”. The resulting SQL Script reads as follows:
ALTER TABLE total_cost_by_customer_combined
ADD COLUMN cost_savings DOUBLE PRECISION;
UPDATE total_cost_by_customer_combined
SET
cost_savings = total_cost_baseline - total_cost_openpotentialfacilities;
This task is also added to the macro; its name is "Calculate Cost Savings by Customer".
Lastly, we give Leapfrog the following prompt to join the table with cost savings (total_cost_by_customer_combined) and the customers table to add the coordinates from the customers table to the cost savings table: “Join the customers and total_cost_by_customer_combined tables on customer and add the latitude and longitude columns from the customers table to the total_cost_by_customer_combined table. Use an inner join and do not create a new table, add the columns to the existing total_cost_by_customer_combined table”. This is the resulting SQL Script, which was added to the macro as the "Add Coordinates to Cost Savings" task:
ALTER TABLE total_cost_by_customer_combined ADD COLUMN latitude VARCHAR;
ALTER TABLE total_cost_by_customer_combined ADD COLUMN longitude VARCHAR;
UPDATE total_cost_by_customer_combined SET latitude = c.latitude
FROM customers AS c
WHERE total_cost_by_customer_combined.customer = c.customername;
UPDATE total_cost_by_customer_combined SET longitude = c.longitude
FROM customers AS c
WHERE total_cost_by_customer_combined.customer = c.customername;
We can now run the macro, and once it is completed, we take a look at the tables present in the Project Sandbox:

We will use Microsoft Power BI to visualize the change in total cost between the 2 scenarios by customer on a map. To do so, we first need to set up a connection to the DataStar project sandbox from within Power BI. Please follow the steps in the “Connecting to Optilogic with Microsoft Power BI” help center article to create this connection. Here we will just show the step to get the connection information for the DataStar Project Sandbox, which underneath is a PostgreSQL database (next screenshot) and selecting the table(s) to use in Power BI on the Navigator screen (screenshot after this one):

After selecting the connection within Power BI and providing the credentials again, on the Navigator screen, choose to use just the total_cost_by_customer_combined table as this one has all the information needed for the visualization:

We will set up the visualization on a map using the total_cost_by_customer_combined table that we have just selected for use in Power BI using the following steps:
With the above configuration, the map will look as follows:

Green customers are those where the total cost went down in the OpenPotentialFacilities scenario, i.e. there are savings for this customer. The darker the green, the higher the savings. White customers did not see a lot of difference in their total costs between the 2 scenarios. The one that is hovered over, in Marysville in Washington state, has a small increase of $149.71 in total costs in the OpenPotentialFacilities scenario as compared to the Baseline scenario. Red customers are those where the total cost went up in the OpenPotentialFacilities scenario (i.e. the cost savings are a negative number); the darker the red, the higher the increase in total costs. As expected, the customers with the highest cost savings (darkest green) are those located in Texas and Florida, as they are now being served from DCs closer to them.
To give users an idea of what type of visualization and interactivity is possible within Power BI, we will briefly cover the 2 following screenshots. These are of a different Cosmic Frog model for which a cost to serve analysis is performed too. Two scenarios were run in this model: Baseline DC and Blue Sky DC. In the Baseline scenario, customers are assigned to their current DCs, and in the Blue Sky scenario, they can be re-assigned to other DCs. The chart on the top left shows the cost savings by region (= US state) that are identified in the Blue Sky DC scenario. The other visualizations on the dashboard are all maps: the top right map shows the customers, colored based on which DC serves them in the Baseline scenario, and the bottom 2 maps show the DCs used in the Baseline (left) and Blue Sky (right) scenarios.

To drill into the differences between the 2 scenarios, users can expand the regions in the top left chart and select 1 or multiple individual customers. This is an interactive chart, and the 3 maps are then automatically filtered for the selected location(s). In the below screenshot, the user has expanded the NC region and then selected customer CZ_593_NC in the top left chart. In this chart, we see that the cost savings for this customer in the Blue Sky DC scenario as compared to the Baseline scenario amount to $309k. From the Customers map (top right) and Baseline DC map (bottom left) we see that this customer was served from the Chicago DC in the Baseline. We can tell from the Blue Sky DC map (bottom right) that this customer is re-assigned to be served from the Philadelphia DC in the Blue Sky DC scenario.
