The following instructions show how to establish a local connection, using Alteryx, to an Optilogic model that resides within our platform.
Watch the video for an overview of the connection process:
A step-by-step set of instructions can also be downloaded as a slide deck here: CosmicFrog-Alteryx-Connection-Instructions
To make a local connection, you must first open a firewall connection between your current IP address and the Optilogic platform. Navigate to the Cloud Storage app – note that the app selection found on the left-hand side of the screen might need to be expanded. Check to see if your current IP address is authorized; if not, add a rule to authorize it. You can optionally set an expiration date for this authorization.

If you are working from a new IP address, a banner notification should be displayed to let you know that the new IP address will need to be authorized.
From the Databases section of the Cloud Storage page, click on the database that you want to connect to. Then, click on the Connection Strings button to display all of the required connection information.

We have connection information available in several formats.
To select the format of your connection information, use the drop-down menu labeled Select Connection String:

For this example, we will copy and paste the strings for the ‘PSQL’ connection. The screen should look something like the following:

You can click on any of the parameters to copy them to your clipboard, and then paste them into the relevant field when establishing the PSQL ODBC connection.
Many tools, including Alteryx, use Open Database Connectivity (ODBC) to enable a connection to the Cosmic Frog model database. To access the Cosmic Frog model, you will need to download and install the relevant ODBC drivers. The latest versions of the drivers are located here: https://www.postgresql.org/ftp/odbc/releases/
From here, click on the latest parent folder, which, as of June 20, 2024, is REL-16_00_0005. Select and download the psqlodbc_x64.msi file.
When installing, use the default settings from the installation wizard.
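Optionally, you can sanity-check the installed driver and your connection parameters outside of any BI tool. Below is a minimal sketch using Python’s pyodbc package, assuming the psqlODBC driver installed above; the angle-bracket placeholders stand in for the values copied from the Connection Strings page:

```python
# Minimal connectivity check, assuming the psqlODBC driver installed above.
# Replace the placeholders with the parameters copied from the Connection
# Strings page in Optilogic Cloud Storage.
import pyodbc

conn = pyodbc.connect(
    "Driver={PostgreSQL ANSI(x64)};"
    "Server=<your-server>;"      # "Server" value
    "Port=6432;"                 # "Port" value
    "Database=<your-database>;"  # "Database" value
    "Uid=<your-username>;"       # "User Name" value
    "Pwd=<your-password>;"       # "Password" value
    "sslmode=require;"           # SSL Mode must be require
)
cursor = conn.cursor()
cursor.execute("SELECT version();")
print(cursor.fetchone()[0])
conn.close()
```

If this prints a PostgreSQL version string, the driver and parameters are good, and any remaining connection issues are specific to Alteryx.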
At this point we have all the pieces needed to make a connection in Alteryx. Open Alteryx and start a new workflow. Drag the Input Data tool into the workflow and click “Connect a File or Database.”

Select “Data sources” and scroll down to select “PostgreSQL ODBC.”


On the next screen, click “ODBC Admin” to set up the connection.

Click “Add” to create a new connection, select “PostgreSQL ANSI(x64)”, and then click “Finish.”

Now we need to configure the connection with the information we gathered from the connection strings.

“Data Source” and “Description” allow you to name the connection; these can be named whatever you wish.
Copy the values for “Server”, “Database”, “User Name”, “Password” and “Port” from the connection string information copied from Optilogic Cloud Storage (see above).
DON’T FORGET to select “require” for “SSL Mode.”
You may click “Test” to confirm the connection works, then click “Save.”
Now select the new connection, in this example “Alteryx Demo Model”, and click “OK.”

Now we need to select, within Alteryx, the same Data Source that we just built in ODBC. We also need to enter the username and password for Alteryx authentication; these are the same credentials used to set up the ODBC connection. Remember to use your specific model’s credentials from the Connection String in the Optilogic platform Cloud Storage page.

Depending on your organization’s security protocols, one additional step might need to be taken to whitelist Optilogic’s PostgreSQL server. This can be done by whitelisting the host URL (*.postgres.database.azure.com) and the port (6432). If you are unsure how to whitelist the server or do not have the necessary permissions, please contact your IT department or network administrator for assistance.
11/16/2023 – There is an issue connecting through ODBC with the latest version of Alteryx. While we await a fix in an updated version of Alteryx, you can still connect with an older version of Alteryx (2021.4.2.47884)
05/01/2024 – Alteryx has resolved the ODBC connection issue with their latest major version release of 2024.1. If your currently installed Alteryx version is not working as intended, please upgrade to the latest version.
An alternative workaround is to disable the AMP engine on your Alteryx workflow. For any workflows that use the ODBC connection to a database hosted on the platform, uncheck the ‘Use AMP Engine’ option in the Workflow Configuration. The Workflow Configuration window displays on the left side of your screen when you click on the whitespace anywhere in your workflow.

Cosmic Frog users can now perform additional quick analyses on their supply chain models’ input and output data through Cosmic Frog’s new grid features. This functionality enables users to easily apply different types of grouping and aggregation to their data, while also allowing users to view their data in a pivoted format.
In this documentation we will cover how grids can be configured to use these new features, show several additional examples, and conclude with a few pointers for effective use of these features.
These new grid features can be accessed from the “Columns” section on the side bar on the right-hand side of input and output tables while in the Data module of Cosmic Frog:

Alternatively, users can also start grouping and subsequently aggregating by right-clicking on the column names in the table grid:

We will first cover Row Grouping, then Aggregated Table Mode, and finally Pivot Mode.
Using the row grouping functionality allows users to select 1 column in an input or output table by which all the records in the table will be grouped. These groups of records can be collapsed and expanded as desired to review the data. In the following screenshot the row grouping feature is used to compare the sources of a certain finished good in a particular period for 1 scenario:

Clicking on Columns on the right-hand side of the table opens the row grouping / aggregated table / pivot grid configuration pane, which shows the configuration for this row grouping:

Once a table is grouped by a field, a next step can be to aggregate one or multiple columns by this grouped field. When this is done, we call it aggregated table mode. Different types of aggregation are available to the user, which will be discussed in this section.
When configuring the grid through the configuration panel that comes up when clicking on Columns on the right-hand side of input & output tables, several options are available to help users find field names quickly and toggle multiple fields on/off simultaneously:

To configure the grid, fields can be dragged and dropped:

Alternatively, instead of dragging and dropping, users can also right-click on the field(s) of interest to add them to the configuration areas. This can be done both in the list of column names at the top of the configuration window, as shown in the following screenshot, and on the column names in the grid itself (an example of which we saw in the “How to Access the New Grid Features” section above):

In the screenshot above (taken with Pivot Mode on, which is why the Column Labels area is also visible), the user right-clicked on the Flow Volume field and can now choose to add it to the Row Groups area (“Group by FlowVolume”), to the ∑ Values area (“Add FlowVolume to values”), or to the Column Labels area (“Add FlowVolume to labels”).
The next screenshot shows the result of a configured aggregated table grid:

When adding numeric fields to the ∑ Values area, the following aggregation options are available to the user:

For non-numeric fields, only the last 3 options are available as aggregations:

Adding an aggregation field through right-clicking on a field name in the grid looks as follows; here, the user right-clicked on a numerical field, Transportation Cost:

When filters are applied to the table, these are still applied when the table is being grouped by rows, aggregated, or pivoted:

It was mentioned above that the number in parentheses after the scenario name represents the number of rows that the aggregation was applied to. We can expand this by clicking on the greater than (>) icon to view the individual rows that make up the aggregation:

When users turn on pivot mode, an extra configuration area named Column Labels becomes available in addition to the Row Groups and ∑ Values areas:
Another example to show the total volumes of different flow types, filtered for finished goods, by scenario is shown in the next screenshot:

So far, we have only looked at using the new grid features on the Optimization Flow Summary output table. Here, we will show some additional examples on different input and output tables.
In this first additional example, a pivot grid is configured to show the total production quantity for each facility by scenario:

In the next example, we will show how to configure a pivot grid to do a quick check on the shipment quantities: how much the backhaul vs linehaul quantity is and how much of each is set to Include vs Exclude:

In the following 2 examples, we are doing some quick analysis on the Simulation Inventory On Hand Report, a simulation (Throg) output table containing granular details on the inventory levels by location and product over time. In the first of these 2 examples, we want to see the average inventory by location and product for a specific scenario:

In the next example, we want to see how often products stock out at the different facilities in the Baseline scenario:

The last 2 examples in this section show 2 different views of Greenfield (Triad) outputs. The first example shows the average transport distance and time by scenario:

Lastly, we want to look, by scenario, at how much of the quantity delivered to customers falls in the 300 miles, 500 miles, and 750 miles service bands:

Please take note of the following to make working with Row Grouping, Aggregated Table Grids, and Pivot Grids as effective as possible:


Watch the video to learn how to create and manage models in your workspace:
The Anura data model contains 11 categories of tables. The most basic of models can be built by populating only six tables (Customers, Facilities, Products, Customer Demand, Production Policies, and Transportation Policies); however, most models require a few tables from each modeling category to build out a realistic interpretation of the supply chain.
Tables are grouped into the following sections:

The following instructions show how to establish a local connection, using Azure Data Studio, to an Optilogic model that resides in the platform.
Watch the video for an overview of the connection process:
To make a local connection, you must first open a firewall connection between your current IP address and the Optilogic platform. Navigate to the Cloud Storage app – note that the app selection found on the left-hand side of the screen might need to be expanded. Check to see if your current IP address is authorized; if not, add a rule to authorize it. You can optionally set an expiration date for this authorization.

If you are working from a new IP address, a banner notification should be displayed to let you know that the new IP address will need to be authorized.
From the Databases section of the Cloud Storage page, click on the database that you want to connect to. Then, click on the Connection Strings button to display all of the required connection information.

We have connection information available in several formats.
To select the format of your connection information, use the drop-down menu labeled Select Connection String:

For this example, we will copy and paste the strings for the ‘PSQL’ connection. The screen should look something like the following:

You can click on any of the parameters to copy them to your clipboard, and then paste them into the relevant field in Azure Data Studio when establishing the PSQL connection.
Within Azure Data Studio, click the “Extensions” button and type “postgres” into the search box to find and install the PostgreSQL extension.

Add a new connection in Azure Data Studio, change the connection type to “PostgreSQL,” and enter the arguments for “PSQL” from the Cloud Storage page. NOTE: you will need to click “Advanced” to type in the Port and to change the SSL mode to “require.”
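To verify the same parameters outside of Azure Data Studio, you can use a short Python script; the sketch below uses the psycopg2 package, with angle-bracket placeholders standing in for the ‘PSQL’ values copied from the Cloud Storage page:

```python
# Minimal connectivity check; replace the placeholders with the values
# from the Connection Strings page.
import psycopg2

conn = psycopg2.connect(
    host="<your-server>",        # server/host value
    port=6432,                   # the Port, entered explicitly
    dbname="<your-database>",    # database name
    user="<your-username>",      # user name
    password="<your-password>",  # password
    sslmode="require",           # SSL mode must be require
)
with conn.cursor() as cur:
    cur.execute("SELECT current_database();")
    print(cur.fetchone()[0])
conn.close()
```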



Depending on your organization’s security protocols, one additional step might need to be taken to whitelist Optilogic’s PostgreSQL server. This can be done by whitelisting the host URL (*.database.optilogic.app) and the port (6432). If you are unsure how to whitelist the server or do not have the necessary permissions, please contact your IT department or network administrator for assistance.
With the 2.8 version of Anura, quite a few new tables and columns are being added, along with a small number of existing columns that have been renamed. These updates enable a number of new features.
We will cover the 2 instances of breaking changes below, followed by a more detailed review of schema adjustments specific to each solver. If you have any questions regarding these updates, please do not hesitate to reach out to support@optilogic.com for further information.
2.7_TransportationRates
The Transportation Rates table previously used by NEO has been renamed to Transportation Band Costing in 2.8. This was done to allow a Hopper-specific table to be built out and take the name Transportation Rates. All data upgrades will be processed automatically, but any ETL workflows that targeted the Transportation Rates table in 2.7 will need to be updated.
2.7_OptimizationCostToServeParentInformationReport
Optimization Cost To Serve Parent Information Report has had its CurrentNodeName and ParentNodeName columns renamed to CurrentSiteName and ParentSiteName. All upgrades will be handled automatically, but any dashboards or external data visualizations that target these columns will need to be updated.
Watch the video to learn how to connect your data tool of choice directly to your Optilogic model:
Transportation policies describe how material flows throughout a supply chain. In Cosmic Frog, we can define our transportation policies using the Transportation Policies (required) and Transportation Modes (optional) tables. The Transportation Policies table will be covered in this documentation. In general, we can have a unique transportation policy for each combination of origin, destination, product, and transport mode.

Typically in simulation models, transportation policies are defined over the group of all products (which can be done by leaving Product Name blank as is done in the screenshot above), unless some products need to be prevented from being combined into shipments together on the same mode. If Transportation Policies list products explicitly, these products will not be combined in shipments.
Here, we will first cover the available transportation policies; other transportation characteristics that can be specified in the Transportation Policies table will be discussed in the sections after.
Currently supported transportation simulation policies are:
Selecting “On Volume”, “On Weight”, or “On Quantity” as a simulation policy means that the volume, weight, or quantity of the shipment, respectively, will determine which transportation mode is selected. In this case, the “Simulation Policy Value” defines the lowest volume (or weight, or quantity) that will go by that mode. We can use multiple lines to define multiple breakpoints for this policy.

Please note that:
If “By Preference” is selected, we can provide a ranking describing which transportation mode we want to use for different origin-destination-product combinations. We can describe our preference using the “Simulation Policy Value” column.

This screenshot shows that all MFG to DC transportation lanes only have 1 Mode of Container and the Simulation Policy is set to By Preference for all of them. If there are multiple Modes available, the By Preference policy will select them, pending availability, in the order of preference specified by the Simulation Policy Value field, the lowest value being the most preferred mode. If there are 2 modes available and the policy is set to By Preference, where 1 mode has a simulation policy value of 1 and the other of 2, the mode with simulation policy value = 1 will be used if available; if it is not, the mode with simulation policy value = 2 will be used.
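As an illustration of this rule, the sketch below implements preference-ordered mode selection in a few lines of Python. This is a simplified illustration of the behavior described above, not Throg’s actual implementation:

```python
# Toy sketch of By Preference mode selection: pick the available mode
# with the lowest Simulation Policy Value.
def select_mode(policies, available_modes):
    # policies: (mode_name, simulation_policy_value) records for one
    # origin-destination-product lane; lowest value = most preferred.
    for mode, _value in sorted(policies, key=lambda p: p[1]):
        if mode in available_modes:
            return mode
    return None  # no mode on the lane is currently available

# Two modes on a lane: Container (value 1) is preferred over Truck (value 2).
lane_policies = [("Truck", 2), ("Container", 1)]
print(select_mode(lane_policies, {"Truck", "Container"}))  # -> Container
print(select_mode(lane_policies, {"Truck"}))               # -> Truck
```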
In the following example, the “Container” mode is preferred over the “Truck” mode for the MFG_CA to DC_IL route. Note that since the “Product Name” column is left blank, this policy applies to all products using this route.

Selecting “By Due Date” is like “By Preference” in that different modes can be ranked via the “Simulation Policy Value”. However, selecting “By Due Date” adds the additional component of demand timing into its selection. This policy selects the highest preference option that can meet the due date of the shipment. The following screenshot shows that the By Due Date simulation policy is used on certain DC to CZ lanes where 2 Modes are used, Truck and Parcel:

Costs associated with transportation can be entered in the Transportation Policies table Fixed Cost and Unit Cost fields. Additionally, the distance and time travelled using a certain Mode can be specified too:

Maximum flow on Lanes (origin-destination-product combinations) and/or Modes (origin-destination-product-mode combinations) can also be specified in the Transportation Policies table:

The Lane Capacity field and its UOM field specify the maximum flow on the Lane, while the Lane Capacity Period and its UOM field indicate over what period of time this capacity applies. In this example, the MFG_CA to DC_AZ lane (first record) has a maximum capacity of 30 shipments every 13 weeks. Once 30 shipments have been shipped on this lane in a 13-week period, the lane cannot be used anymore during those 13 weeks; it is available for shipping again from the first day of the next 13-week period. What happens when a lane’s capacity is reached depends on how the simulation logic is set up. It can, for example, lead to the simulation making different sourcing decisions: if By Preference sourcing is used and the lane capacity on the lane from the preferred source to the destination has been reached for the period, this source is no longer considered available and the next preferred source will be checked for availability, etc.
Analogous to the 4 fields that set Lane Capacity, shown and discussed above, the Transportation Policies table also has 4 fields to set Lane Mode Capacity, where the capacity is applied specifically to a mode rather than to the whole lane in case multiple Modes exist on the lane: Lane Mode Capacity and its UOM field, and Lane Mode Capacity Period and its UOM field.
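To make the capacity-period mechanics concrete, here is a toy Python sketch of a rolling lane capacity check, using the 30-shipments-per-13-weeks example above; Throg’s actual bookkeeping is more involved:

```python
# Toy sketch: a lane allows at most 30 shipments per 13-week (91-day)
# period, and the count resets at the start of each new period.
def lane_available(shipments_by_day, day, capacity=30, period_days=91):
    period_start = (day // period_days) * period_days
    used = sum(q for d, q in shipments_by_day if period_start <= d <= day)
    return used < capacity

shipments = [(3, 10), (20, 10), (45, 10)]  # (day, shipment count) pairs
print(lane_available(shipments, day=60))   # False: 30 of 30 already used
print(lane_available(shipments, day=95))   # True: a new 13-week period began
```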
There are a few other fields on the Transportation Policies table that the Throg simulation engine will take into account if populated.
If you are running into issues loading Atlas or Cosmic Frog initially, where the loading spinner is stuck on the screen, you can attempt to perform a hard refresh of the browser. A stuck spinner can be caused by the website loading a stale token; refreshing the page without loading from cache should generate a fresh token.

To perform a hard refresh of the browser, hit CTRL + F5. Alternatively, you can hit the refresh button in the browser while holding down CTRL.
If the issue persists, please reach out to support@optilogic.com.
Watch the video to learn how to create, manage and hyperscale scenarios:
Model constraints allow us to capture real-world limitations in our supply chain. These limitations might include production capacity, transportation restrictions, storage limits, etc.
By default, Cosmic Frog allows us to incorporate the following constraint types for Neo optimization models:
Each set of constraints has its own table in the Cosmic Frog data schema.

Flow Constraints introduce a limit on the amount of flow (product) that can pass between an origin/destination pair in our supply chain.
In the example below, we limit the number of pencils that the Ashland production site can ship to the Chicago customer to a maximum of 20 per period.
Note that we can adjust the unit of measure of our constraints as needed. For example, we might be interested in limiting the total weight of product shipped between locations, instead of the total count of product.

Inventory constraints limit the amount of product that can be stored at a facility. They can represent both minimum and maximum inventory limits.
In the example below we limit the capacity of the Bakersfield distribution center such that it can hold at most 500 pens. We are also requiring that the DC stores at least 100 pencils.

Production constraints limit the amount of product a supplier or a production facility can produce. Like inventory constraints, you can set both minimum and maximum production limits.
In the example below, the Memphis production site can produce at most 250 pens in each period. Additionally, the Ashland site has a fixed production constraint, which requires it to always produce exactly 400 pencils in each period.

Facility count constraints limit the number of facilities that can be used in a Cosmic Frog model. These constraints are often used in facility location and selection problems.
For many of the “count” constraints, we want to apply our constraint across a number of different supply chain elements (e.g. across all facilities). To do this, the group functionality of Cosmic Frog is particularly useful, and often necessary. For more detailed information on using groups in constraints, see Using Groups to Write Model Constraints.
In the following example we want to design a supply chain with exactly 2 facilities, and we want to know the optimal location of these 2 facilities. First, we set the status of candidate locations in our Facilities table to “consider”. Without making this change, our model will be forced to include all facilities where Status = “Include” in our network.

Then, we create a constraint requiring the model to select exactly 2 facilities from the “ProductionSites” group.

Flow count constraints limit the number of flows of a particular product. Flow count constraints are particularly useful in problems where the available transportation resources are limited.
In the following example we are implementing a “single-sourced” rule where each customer is served by exactly one production site. For each customer (enumeration) we look across all production sites (aggregation) and set the flow count to be exactly 1.

Production count constraints allow us to manage the number of product types that are produced at different facilities.
In the example below we add two production count constraints. First, we require that only one production site produces pens. Second, we require that each production site makes at most one product from the “Products” group.

Inventory count constraints allow us to limit the number of product types that are stored at our facilities. In the example below we require the Bakersfield DC to hold at most one type of product.

In a supply chain model, sourcing policies describe how network components create and order necessary materials. In Cosmic Frog, sourcing policies and rules appear in two different table categories:


In this section, we describe how to use the model elements tables to define sourcing rules for customers and facilities. Specifically, we can decide if each element is single sourced, allows backorders, and/or allows partial fulfillment.
Single source policies can be defined on either the order level or the line-item level. Setting “Single Source Orders” to “True” for a location means that for each order placed by that location, every item in that order must come from a single source. Setting this value to “False” does not prohibit single sourcing, it just removes the requirement.

Setting “Single Source Line Items” to “True” only requires that each individual line item come from a single source. In other words, even if this is “True”, an individual order can have multiple sources, as long as each line item is single sourced.
If “Single Source Orders” is set to “True” and “Single Source Line Items” is set to “False”, the “Single Source Orders” value takes precedence.
In case an order cannot be fulfilled by the due date (as set on the Customer Orders table in the case of Customers), it is possible to allow backorders, where the order will still be filled but late, by setting the “Allow Backorders” value to “True”. A time limit can be set on this by using the “Backorder Time Limit” field and its UOM field, set to 7 days in the screenshot below. This means that orders are allowed to be backordered, but if after 7 days an order still is not filled, it is cancelled. Leaving Backorder Time Limit blank means there is no time limit, and the order can be filled late indefinitely.

We can also decide to allow partial fulfillment of orders or individual line items. If “Allow Partial Fill Orders” is set to “False”, orders need to be filled in full. If set to “True”, then filling only part of an order on time (by the due date) is allowed. What happens with the unfulfilled part of the order depends on whether backorders are allowed. If so (“Allow Backorders” = “True”), then the remaining quantity of a partially filled order can be satisfied in the future with additional shipments. If a time limit on backorders is set and is reached on a partially filled order, the remaining quantity will be cancelled. “Partial Fill Orders” and “Partial Fill Line Items” behave similarly to the single sourcing policies, where it is possible, for example, to allow partially filling orders but not partially filling line items. If “Partial Fill Orders” is set to “True”, then “Partial Fill Line Items” will also be forced to “True”.
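The interaction of these four settings can be summarized in a small sketch. This is a toy illustration of the precedence rules described above, not Throg’s actual code:

```python
# Toy illustration of how the fulfillment flags interact for one location.
def effective_settings(single_source_orders, single_source_line_items,
                       partial_fill_orders, partial_fill_line_items):
    # Single Source Orders takes precedence: an order filled entirely from
    # one source implies every line item in it is single sourced too.
    if single_source_orders:
        single_source_line_items = True
    # Allowing partial fill at the order level forces it at the line-item
    # level as well.
    if partial_fill_orders:
        partial_fill_line_items = True
    return {
        "Single Source Orders": single_source_orders,
        "Single Source Line Items": single_source_line_items,
        "Partial Fill Orders": partial_fill_orders,
        "Partial Fill Line Items": partial_fill_line_items,
    }

print(effective_settings(True, False, True, False))  # all four become True
```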

In a supply chain model, sourcing policies describe how network components create and order necessary materials. In Cosmic Frog, sourcing rules & policies appear in two different table categories:


In this section, we will discuss how to use these Sourcing policy tables to incorporate real-world behavior. In the sourcing policy tables we define 4 different types of sourcing relationships:
First, we will discuss the options users have for the simulation policy logic used in these 4 tables; the last section covers the other simulation-specific fields that can be found on these sourcing policy tables.
Customer fulfillment policies describe which supply chain elements fulfill customer demand. For a Throg (Simulation) run, there are 3 different policy types that we can select in the “Simulation Policy” column:
If “By Preference” is selected, we can provide a ranking describing which sites we want to serve customers for different products. We can describe our preference using the “Simulation Policy Value” column.
In the following example we are describing how to serve customer CZ_CA’s demand. For Product_1, we prefer that demand is fulfilled by DC_AZ. If that is not possible, then we prefer DC_IL to fulfill demand. We can provide rankings for each customer and product combination.
Under this policy, the model will source material from the highest ranked site that can completely fill an order. If no sites can completely fill an order, and if partial fulfillment is allowed, the model will partially fill orders from multiple sources in order of their preference.

If “Single Source” is selected, the customer must receive the given product from 1 specific source, 1 of the 3 DCs in this example.
The “Allocation” policy is similar to the “By Preference” policy, in that it sources from sites in order of a preference ranking. The “Allocation” policy, however, does not look to see whether any sites can completely fill an order before doing partial fulfillment. Instead, it will source as much as possible from source 1, followed by source 2, etc. Note that the “Allocation” and “By Preference” policies will only be distinct if partial fulfillment is allowed for the customer/product combination.

Consider the following example: customer CZ_MA can source the 3 products it places orders for from 3 DCs using the By Preference simulation policy. For each product the order of preference is set the same: DC_VA is the top choice, then DC_IL, and DC_AZ is the third (last) choice. Also note that in the Customers table, CZ_MA has been configured so that partially filling orders and line items is allowed for this customer.

The first order of the simulation is one that CZ_MA places (screenshot from the Customer Orders table): it orders 20 units of Product_1, 600 units of Product_2, and 160 units of Product_3:

The inventory at the DCs for the products at the time this order comes in is the same as the initial inventory, as this customer order is the first event of the simulation:

When the simulation policy is set to By Preference, we will look to fill the entire order from the highest priority source possible. The first choice is DC_VA, so we check its inventory: it has enough inventory to fill the 20 units of Product_1 (375 units in stock) and the 160 units of Product_3 (500 units in stock), but not enough to fill the 600 units of Product_2 (150 units in stock). Since the By Preference policy prefers to single source, it looks at the next priority source, DC_IL. DC_IL does have enough inventory to fulfill the whole order, as it has 750 units of Product_1, 1000 units of Product_2, and 300 units of Product_3 in stock.
Now, if we change all the By Preference simulation policies to Allocation via a scenario and run this scenario, the outcomes are different. In this case, as many units as possible are sourced from the first choice DC, DC_VA. This means sourcing 20 units of Product_1, 150 units of Product_2 (all that are in stock), and 160 units of Product_3 from DC_VA. Next, we look at the second choice source, DC_IL, to see if we can fill the rest of the order that DC_VA cannot fill: the 450 units left of Product_2, which DC_IL does have enough inventory to fill. These differences in sourcing decisions between the 2 scenarios can be seen in the Simulation Shipment Report output table, shown below.
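The difference between the two policies can also be expressed as a short sketch. The toy Python example below (not Throg’s actual implementation) reproduces the sourcing decisions just described, using the same order and inventory numbers:

```python
# The order CZ_MA places, and the DCs in order of preference with their
# on-hand inventory at the time of the order (from the example above).
order = {"Product_1": 20, "Product_2": 600, "Product_3": 160}
sources = [
    ("DC_VA", {"Product_1": 375, "Product_2": 150, "Product_3": 500}),
    ("DC_IL", {"Product_1": 750, "Product_2": 1000, "Product_3": 300}),
]

def by_preference(order, sources):
    # Fill the entire order from the highest ranked source that can.
    for name, stock in sources:
        if all(stock.get(p, 0) >= qty for p, qty in order.items()):
            return {name: dict(order)}
    return None  # simplified: would partially fill by preference if allowed

def allocation(order, sources):
    # Source as much as possible from each source, in order of preference.
    plan, remaining = {}, dict(order)
    for name, stock in sources:
        shipped = {}
        for product, qty in remaining.items():
            take = min(qty, stock.get(product, 0))
            if take > 0:
                shipped[product] = take
                remaining[product] -= take
        if shipped:
            plan[name] = shipped
    return plan

print(by_preference(order, sources))
# -> {'DC_IL': {'Product_1': 20, 'Product_2': 600, 'Product_3': 160}}
print(allocation(order, sources))
# -> {'DC_VA': {'Product_1': 20, 'Product_2': 150, 'Product_3': 160},
#     'DC_IL': {'Product_2': 450}}
```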

Replenishment policies describe how internal (i.e. non-customer) supply chain elements source material from other internal sources. For example, they might describe how a distribution center gets material from a manufacturing site. They are analogous to customer fulfillment policies, except instead of requiring a customer name, they require a facility name.

Procurement policies describe how internal (i.e. non-customer) supply chain elements source material from external suppliers. They are analogous to replenishment policies, except instead of using internal sources (e.g. manufacturing sites), they use external suppliers in the Source Name field.

Production policies allow us to describe how material is generated within our supply chain.

There are 4 simulation policies regarding production.
Besides the Simulation Policy, each of these Sourcing Policies tables has several other fields that the Throg simulation engine uses as well, if populated. All 4 Sourcing Policies tables contain a Unit Cost and a Lot Size field, plus their UOM fields. The following screenshot shows these fields on the Replenishment Policies table:

The Customer Fulfillment Policies and Replenishment Policies tables both also have an Only Source From Surplus field which can be set to False (default behavior when not set) or True. When set to True, only sources which have available surplus inventory are considered as the source for the customer/facility – product combination. What is considered surplus inventory can be configured using the Surplus fields on the Inventory Policies input table.
Finally, the Production Policies table also has several additional fields.
The Transportation Modes table is an optional input table that is often used when running a simulation. Mode attributes like fill levels and capacities are specified in this table to control the size of shipments, which will be explained first in this documentation. The rules of precedence when using multiple fill level / capacity fields and when using On Volume / Weight / Quantity transportation simulation policies will also be covered.

The same capacity and fill level fields as for Volume are also available in this table for Quantity and Weight (not shown in the screenshot above).
When utilizing more than 1 of the Fill Level fields, the one that is reached first is applied. For example, if a shipment’s weight has reached the weight fill level, but its volume has not yet reached the volume fill level, the shipment is allowed to be dispatched.
Similarly, if more than 1 Capacity field has been populated, the one that is reached first is applied. For example, if a shipment’s volume has reached the volume capacity but not yet the weight capacity, it cannot be filled up further and will be dispatched.
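These two rules can be summarized in a small sketch. The toy Python example below (not Throg’s actual code) checks a shipment against whichever fill level and capacity fields are populated:

```python
# Toy sketch: a shipment may dispatch as soon as ANY populated fill level
# is reached, and must stop filling as soon as ANY populated capacity is
# reached. Only the fields populated for the mode are checked.
def can_dispatch(shipment, fill_levels):
    return any(shipment[dim] >= level for dim, level in fill_levels.items())

def at_capacity(shipment, capacities):
    return any(shipment[dim] >= cap for dim, cap in capacities.items())

shipment = {"volume": 2100, "weight": 12000}
print(can_dispatch(shipment, {"volume": 2000, "weight": 18000}))  # True
print(at_capacity(shipment, {"volume": 2500, "weight": 20000}))   # False
```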
As mentioned above, when transportation simulation policies of On Quantity / Weight / Volume are being used, the fill levels and capacities of these Modes are specified in the simulation policy value field on the Transportation Policies table. If the Transportation Modes table is also used to set any fill level and/or capacity for these modes, users need to take note of the effects this may have.
Custom Columns can be added to any input table directly through Cosmic Frog. To add a custom column to a table, open the table and then select Grid > Create Custom Column.

This will open a window on the right-hand side of the screen where you can configure your custom column. Each of the required entries is used as follows:
This will be the name of the custom column. Please note that column names may only contain alphanumeric characters and underscores. Spaces are not allowed in column names, and you will not be able to create a column name that begins with a number. If you enter an invalid column name, you will not be able to save the column, and a message will be displayed noting the issues with the name currently entered.
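For reference, the naming rules above are equivalent to a simple regular expression. The sketch below is an illustration of the rules as described, not Cosmic Frog’s actual validation code:

```python
import re

# Letters, digits, and underscores only; must not begin with a digit.
VALID_COLUMN_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

for name in ["unit_margin", "2024_forecast", "unit margin", "region1"]:
    print(name, "->", bool(VALID_COLUMN_NAME.match(name)))
# unit_margin -> True
# 2024_forecast -> False (begins with a number)
# unit margin -> False (contains a space)
# region1 -> True
```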
Pseudo typing allows the data entered into this column to accept all sorts of input formatting without strictly adhering to the Data Type selected for the custom column. For example, if the Data Type for a given custom column is set to numeric and you attempt to import a string into the field, Pseudo Typed = TRUE will allow this import to succeed. If Pseudo Typed = FALSE, then this import would fail, as the field would only accept numbers. This also applies when editing the field directly within the grid.
For reference, all fields within the standard Anura schema are pseudo typed. We allow all entries into the data fields and then cast the entries to their appropriate data type for use within maps, dashboards, and the application of scenario items.
Select one of the available data types that the data in your custom column will be formatted in. Depending on the selection chosen for the Pseudo Typed setting, this data type will be strictly enforced (Pseudo Typed = FALSE) or be relaxed and only used when generating visuals or applying scenario items (Pseudo Typed = TRUE).
This setting informs Cosmic Frog if the custom column should be used as a primary key for the table when evaluating file imports. If True, the custom column will be included in addition to the table’s default primary key columns.
When data is imported into a Cosmic Frog model through the Import File action, data is loaded using Upsert logic. This means that rows which already exist in a table will be updated, and rows which do not exist will be inserted. Existing rows will be identified using the table’s primary key – all input tables have a preconfigured set of columns that represent the primary key of the input table. For information on which columns represent the primary key of any given table, please reference Downloadable Anura Data Structure – Inputs | Optilogic.
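Conceptually, this Upsert logic corresponds to PostgreSQL’s INSERT ... ON CONFLICT statement. The sketch below illustrates the idea only; the table and column names are hypothetical, and Cosmic Frog’s import implementation may differ:

```python
# Hypothetical illustration of upsert semantics: rows whose primary key
# already exists are updated; all other rows are inserted.
upsert_sql = """
INSERT INTO customers (customername, country, latitude, longitude)
VALUES (%s, %s, %s, %s)
ON CONFLICT (customername)            -- the table's primary key column(s)
DO UPDATE SET country   = EXCLUDED.country,
              latitude  = EXCLUDED.latitude,
              longitude = EXCLUDED.longitude;
"""
```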
Once all of the settings have been filled out, click Save and your table will refresh with the custom column now displayed on the far right of the table. You’ll notice that the column name has been converted to lowercase – this is done to align with standard PSQL naming conventions.
If any issues arise during the Custom Column creation, please reach out to support@optilogic.com for further assistance.
Simulations are generally driven by demand specified as customer orders. These orders can be entered in the Customer Orders and/or the Customer Order Profiles input tables. The Customer Orders table typically contains historical transactional demand records to simulate a historical baseline. The Customer Order Profiles table, on the other hand, contains descriptions of customer order behaviors from which the simulation engine (Throg) generates orders that follow these profiles.
In this documentation we cover both of these input tables: the Customer Orders table and the Customer Order Profiles table.
To achieve the level of granularity needed and the time-based events to mimic reality as best as possible, every customer order to be simulated is explicitly defined in the customer orders table; this includes line items, order and due dates, and order quantities:

Users can utilize the following additional fields available on the Customer Orders table if required. The single sourcing, allow partial fill, and allow backorder settings behave the same as those that can be set on the Customers table (see this help article), except that these apply to individual orders/line items rather than to all orders at the customer over the whole simulation horizon. Note that if these are set here on the Customer Orders table, they take precedence over any values set for the particular customer in the Customers table:
Rather than specifying individual orders and line items, the Customer Order Profiles table generates these individual orders from profiles, which can, for example, disaggregate monthly demand forecasts into assumed or inferred profiles, using variability to randomize characteristics like quantities and time between orders.
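A toy sketch of this idea is shown below: it expands a profile into individual orders using randomized quantities and inter-order times. The profile fields used here (average quantity, average interval, variability range) are simplified assumptions; Throg’s actual order generation logic is richer:

```python
import random

# Toy profile expansion: roughly periodic orders with randomized quantity
# and time between orders.
def generate_orders(start_day, end_day, avg_qty, avg_interval, variability=0.2):
    orders, day = [], start_day
    while day <= end_day:
        qty = round(avg_qty * random.uniform(1 - variability, 1 + variability))
        orders.append((day, qty))
        day += max(1, round(avg_interval
                            * random.uniform(1 - variability, 1 + variability)))
    return orders

# E.g. roughly weekly orders of ~100 units over a 90-day horizon.
print(generate_orders(start_day=1, end_day=90, avg_qty=100, avg_interval=7))
```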


Note that by using start and end dates for profiles, users can control the portion of the simulation horizon in which a profile is used. This enables users, for example, to capture seasonal demand behaviors by defining a profile for Customer A/Product X in winter, and another profile for the same customer-product combination in summer.
Two scenarios were run: 1 named “CZ_CO P4 profile a”, which includes customer order profile a to generate orders at CZ_CO for Product_4, and 1 named “CZ_CO P4 profile b”, which includes customer order profile b to generate orders at CZ_CO for Product_4. These are the profiles shown in the 2 screenshots above. In the Simulation Order Report output table, one can see the individual orders generated by these profiles during the simulation runs of these 2 scenarios:

Inventory policies describe how inventory is managed across facilities in our supply chain. These policies can include how and when to replenish, how stock is picked out of inventory, and many other important rules.
In general, we add inventory policies using the Inventory Policies table in Cosmic Frog.

In this documentation we will cover the types of inventory simulation policies available and also other settings contained in the Inventory Policies table.
An (R,Q) policy is a commonly used inventory management approach. Here, when inventory drops below a value of R units, the policy is to order Q units. In Cosmic Frog, when an (R,Q) policy is selected, we can define R and Q in “SimulationPolicyValue1” and “SimulationPolicyValue2”, respectively. We can define the unit of measure (e.g. pallets, volume, individual units, etc.) for both parameters in their corresponding simulation policy value UOM column.
In the following example, MFG_STL has an (R,Q) inventory policy of (100,1900) for Product_2, measured in terms of individual units (i.e. “each”).

(s,S) policies are like (R,Q) policies in that they define a reorder point and how much to reorder. In an (s,S) policy, when inventory is below s units, the policy is to “order up to” S units. In other words, if x is the current inventory level, and x < s, the policy is to order (S-x) units of inventory.
In the example below, DC_VA has an (s,S) inventory policy of (150,750) for Product_1. If inventory dips below 150, the policy is to order so that inventory would replenish to 750 units.

(s,S) policies may also be referred to as (Min,Max) policies; both policy names are accepted in the Anura schema and both behave as described above.
A (T,S) inventory policy is like an (s,S) inventory policy in that whenever inventory is replenished, it is replenished up to level S. Under an (s,S) inventory policy, we check the inventory level in each period when making reorder decisions. In contrast, under a (T,S) inventory policy, the current inventory level is only checked every T periods. During one of these checks, if the inventory level is below S, then inventory is replenished up to level S.
In the example below, DC_VA manages Product_1 using a (T,S) inventory policy. The DC checks the inventory level every 5 days. If inventory is below 750 units during any of these checks, inventory is replenished up to 750 units.
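The reorder decisions of the three replenishing policies covered so far can be summarized in a short sketch. This is a toy illustration using the parameter values from the examples above, not Throg’s actual implementation:

```python
def rq_policy(inventory, R, Q):
    """(R,Q): when inventory drops below R, order Q units."""
    return Q if inventory < R else 0

def ss_policy(inventory, s, S):
    """(s,S) / (Min,Max): when inventory drops below s, order up to S."""
    return S - inventory if inventory < s else 0

def ts_policy(inventory, day, T, S):
    """(T,S): check only every T periods; if below S, order up to S."""
    return S - inventory if day % T == 0 and inventory < S else 0

print(rq_policy(inventory=90, R=100, Q=1900))       # -> 1900
print(ss_policy(inventory=140, s=150, S=750))       # -> 610
print(ts_policy(inventory=600, day=5, T=5, S=750))  # -> 150
```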

As the name suggests, a Do Nothing inventory policy does not trigger any replenishment orders. This policy can, for example, be used for products that are being phased out or at manufacturing locations where production occurs based on a schedule.
In the example below, MFG_STL uses the Do Nothing inventory policy for the 3 products it manufactures.

On the Inventory Policies table, other fields available to the user to model inventory include those to set initial inventory, how often inventory is reviewed, and the inventory carrying cost percentage:

When Only Source From Surplus is set to True on a customer fulfillment or a replenishment policy, the Surplus fields on the Inventory Policies table can be used to specify what is considered surplus inventory for a facility – product combination:

Note that if all inventory needs to be pushed out of a location, Push replenishment policies need to be set up for that location (where the location is the Source), and Surplus Level needs to be set to 0.
Inventory Policy Value fields can also be expressed in terms of the number of days of supply, to enable the modeling of inventory where levels go up or down when (forecasted) demand goes up or down. Please see the help center article “Inventory – Days of Supply (Simulation)” to learn more about how this can be set up and the underlying calculations.
Named Filters are an exciting new feature that allows users to create and save specific filters directly on grid views, which can then be utilized seamlessly across all policy tables, scenario items, and map layers. For example, if you create a filter named “DCs” in the Facilities table to capture all entries with “DC” in their designation, this Named Filter can then be applied in a policy table, providing a dynamic alternative to the traditional Group function.
Unlike Groups, named filters automatically update: adding or removing a DC record in the Facilities table will instantly reflect in the Named Filter, streamlining the workflow and eliminating the need for manual updates. Additionally, when creating Scenario Items or defining Map Layers, users can easily select Named Filters to represent specific conditions, easily previewing the data, making the process much quicker and simpler.
In this help article, we will first cover how Named Filters are created. In the sections after, we will discuss how Named Filters can be used on input tables, in scenario items, and on map layers, while the final section contains a few notes on deleting Named Filters.
Named Filters can be set up and saved on any Cosmic Frog table: input tables, output tables, and custom tables. These tables are found in the Data module of a Cosmic Frog model:


A quick description of each of the options available in the Filter drop-down menu follows here; we will cover most of these in more detail in the remainder of this help article:
Let’s walk through setting up a filter on the Facilities table that filters out records where the Facility Name ends in “DC” and save it as a named filter called “DCs”:

After setting up this filter, click on Add Filter in the Filter drop-down menu to save it:


There is a right-click context menu available for filters listed in the Named Filters pane, which allows the user to perform some of the same actions as those in the main Filter menu shown above:

Next, we will see how multiple Named Filters can be applied to a table. As an example, there are 4 Named Filters set up on the Facilities table in this model:

Now, if we want to filter out only Ports located in the USA, we can apply 2 of the Named Filters simultaneously:

In other words, applying multiple Named Filters to a table acts as AND statements: only records that match the criteria of all Named Filters are filtered out.
To alter an existing filter, we can change the criteria of this existing filter and then save the resulting filter as a new Named Filter. Let’s illustrate this through an example: in a model with about 1.3k customers in the US, we have created a Named Filter “US Midwestern States”, but later on realize that we have not included the state of Michigan as 1 of the 12 Regions to include in this filter:


Now the US Midwestern States Named Filter with this one specific addition is applied. However, this does not update the US Midwestern States Named Filter itself. The table will now show the field name Region at the right top as the indication of what type of filter is applied. Click on Add Filter in the Filter drop-down and give the Named Filter a new unique name; naming it the same as an existing Named Filter to override it is not possible. Say we called the new Named Filter “US Midwestern States New”:

If the US Midwestern States filter was already used on any other tables, scenario items, or map layers (see the sections below), then the easiest way to start using the new filter with Michigan added would be to 1) either delete or rename the US Midwestern States filter (for example to US Midwestern States Old), and then 2) rename the US Midwestern States New filter to US Midwestern States.
In this example, we added a condition to a field that already had multiple conditions on it. Similar steps can be used to update Named Filters where conditions are removed from fields or conditions are added to fields that do not have conditions on them yet.
So far, the only examples were of filters applied to one field in an input table. The next example shows a Named Filter called “CZ2* Space Suit demand >6k” on the Customer Demand input table:

Conditions were applied to 3 fields in the Customer Demand table, as follows: 1) Customer Name Begins With “CZ2”, 2) Product Name Contains “Space”, and 3) Quantity Greater Than “6000”. The resulting filter was saved as a Named Filter with the name “CZ2* Space Suit demand >6k” which is applied in the screenshot above. When hovering over this Named Filter, we indeed see the 3 fields and that they each have a single condition on them.
Besides being able to create Named Filters on input tables, they can also be created for output and custom tables. On output tables, this can expedite the review of results after running additional scenarios: once the runs are done, one can apply a pre-saved set of Named Filters one after the other, instead of having to re-type each filter that shows the outputs of interest each time. This example shows a Named Filter on the Optimization Facility Summary output table to show records where the Throughput Utilization is greater than 0.8:

The last option of Show Input Data Errors in the Filter menu creates a special filter named ERRORS and filters out records in the input table it is used on that have errors in the input data. This can be very helpful as records with input errors may have these in different fields and the types of errors may be different, so a user is not able to create 1 single filter that would capture multiple different types of errors. When this filter is applied, any record that has 1 or multiple fields with a red outline will be filtered out and shown. Hovering over the field gives a short description of the problem with the value in the field.

Named Filters for certain model elements (i.e. Customers, Facilities, Suppliers, Products, Periods, Modes, Shipments, Transportation Assets, Processes, Bills Of Materials, Work Centers, and Work Resources) can be used in other input tables, very similarly to how Groups work in Cosmic Frog: instead of setting up multiple records for individual elements, for example a transportation policy from A to B for each finished good, a Named Filter that filters out all finished goods on the Products table can be used to set up 1 transportation policy for these finished goods from A to B (which at run-time will be expanded into a policy for each finished good). The advantage of using Named Filters instead of Groups is that Named Filters are dynamic. If records are added to tables containing model elements and they match the conditions of any Named Filters, they are automatically added to those Named Filters. Think for example of Products with the prefix FG_ to indicate they are finished goods, and a Named Filter “Finished Goods” that filters the Product Name on Begins With “FG_”. If a new product is added where the Product Name starts with FG_, it is automatically added to the Finished Goods Named Filter, and anywhere this filter is used, this new finished good is now included too. We will look at 2 examples in the next few screenshots.

The completed transportation policy record uses Named Filters for the Origin Name, Destination Name, and Product Name, making this record flexible as long as the naming conventions of the factories, ports, and raw materials keep following the same rules.

The next example is on one of the Constraints tables, Production Constraints. On the Constraints tables, the Group Behavior fields dictate how an element name that is a Group or a Named Filter should be used. When set to Enumerate, the constraint is applied to each individual member of the group or named filter. If it is set to Aggregate, the constraint applies to all members of the group or named filter together. This Production Constraint states that at each factory a maximum amount of 150,000 units over all finished goods together can be produced:

Previously, when setting up a scenario item, the records that the scenario item’s change needed to be applied to could be set by using the Condition Builder. Now, users have the added option to use 1 or multiple saved Named Filters instead, which makes it easier, as the user does not need to know the syntax for building a condition, and more flexible, as Named Filters are dynamic, as discussed in the previous section. In addition, users can preview the records that the change will be made to, reducing the chance of mistakes.
The following example changes the Suppliers table of a model which has around 70 suppliers; about half of these are in Europe and the other half in China:


The Named Filters drop-down collapses after choosing the China Suppliers Named Filter as the condition, and now we see the preview of the filtered grid. This is the Suppliers table with the Named Filter China Suppliers applied. At the right top of the grid, the name of the applied Named Filter(s) is shown, and in the preview we indeed only see Suppliers for which the Country is China. These are the records the change (setting Status = Exclude) will be made to in scenarios that use this scenario item.
Next, we will look at an example where we apply 2 Named Filters as the condition for a scenario item:

After clicking on the Named Filters drop-down list, we select 2 of them:

The FG Rockets and FG Space Suits Named Filters have been selected. FG Rockets only filters out Rockets from the Products table (condition of Contains “rocket” on the Product Name field) and FG Space Suits only filters out Space Suits from the Products table (condition of Contains “space” on the Product Name field). The Filter Grid Preview looks as follows after applying these 2 Named Filters:

In other words, applying multiple Named Filters to a scenario item is additive (acts like OR statements): records that match the criteria of any of the Named Filters are filtered out.
A few notes on the Filter Grid Preview:
Besides using Named Filters on other input tables and for setting up conditions for scenario items, they can also be used as conditions for Map Layers, which will be covered in this section of the Help Article. As for scenario items, there is also a Filter Grid Preview for Map Layers to double-check which records will be filtered out when applying the condition(s) of 1 or multiple Named Filters.
In this first example, a Named Filter on the Facilities table filters out only the Facilities that are located in the USA:


Another example of the same model is to use a different Named Filter from the Facilities table to show only Factories on the map:

If the Factories and the Ports Named Filters had both been enabled, then all factories and ports would be shown on the map. So, as for scenario items, applying multiple Named Filters to a Map Layer is additive (acts like OR statements).
The same notes that were listed for the Filter Grid Preview for scenario items apply to the Filter Grid Preview for Map Layers too: columns with conditions have the filter icon on them, and users can resize and (multi-)sort the columns; however, re-ordering the columns is not possible.
Named Filters can be deleted, and this affects other input tables, scenario items, and map layers that used the now deleted Named Filter(s) in slightly different ways. This will be explained in this final section of the Help Article on Named Filters.
A Named Filter can be deleted by using the Filter menu’s “Delete Filter” option or by choosing that same option from the right-click context menu of a Named Filter. One important note first: the user will not be asked for confirmation after choosing the Delete Filter option; the filter will be deleted immediately.
The results of deleting a Named Filter that was used are as follows:

Removing the deleted Named Filter(s) from these Map Layers resolves the issues.
With Optilogic’s new Teams feature set (see the "Getting Started with Optilogic Teams" help center article), working collaboratively on Cosmic Frog models has never been easier: all members of a team have access to all contents added to that team’s workspace. Centralizing data using Teams ensures there is a single source of truth for files/models, which prevents version conflicts. It also enables real-time collaboration, where files/models are seamlessly shared across all team members, and updates to any files/models are instantaneous for all team members.
However, whether your organization uses Teams or not, there can be a need to share Cosmic Frog models, for example to:
In this documentation we will cover how to share models and the different options for sharing. Models can be shared from an individual user or a team to another individual user or team. Since the risk of something undesirable happening to a model increases when multiple people work on it, it is important to be able to go back to a previous version of the model. Therefore, it is best practice to make a backup of a model prior to sharing it. Continue making backups when important/major changes are going to be made or when wanting to try out something new. How to make a backup of a model will be explained in this documentation too, and will be covered first.
A backup of a model is a snapshot of its exact state at a certain point in time. Once a backup has been made, users can revert to it if needed. Initiating the creation of a backup of a Cosmic Frog model can be done from 3 locations within the Optilogic platform: 1) from the Models module within Cosmic Frog, 2) through the Explorer, and 3) from within the Cloud Storage application on the Optilogic platform. The option from within Cosmic Frog will be covered first:

When in the Models module of Cosmic Frog (its start page), hover over the model you want to create a backup for, and click on the 5th icon that comes up at the bottom right of the model card.
From the Cloud Storage application it works as follows:

Through the Explorer, the process is similar:

Whether from the Models module within Cosmic Frog, through the Cloud Storage application or via the Explorer, in all 3 cases the Create Backup form comes up:

After clicking on Confirm, a notification at the top of the user’s screen will pop up saying that the creation of a backup has been started:

At the same time, a locked database icon with hover-over text of “Backup in progress…” appears in the Status field of the model database (this is in the Cloud Storage application’s list of databases):

This locked database icon will disappear again once the backup is complete.
Users can check the progress of the backup by going to the Account menu under their username at the right top of the screen and selecting “Account Activity” from the drop-down menu:

To access any backups, users can expand individual model databases in the Cloud Storage application:

There are 2 more columns in the list of databases that are not shown in the screenshot above:

When choosing to restore a backup, the following form comes up:

Now that we have discussed how models can be backed up, we will cover how models can be shared. Note that it is best practice to make a backup of your model before sharing it.
If your organization uses Teams, first make sure you are in the correct workspace, either a Team’s or your personal My Account area, from which you want to share a model. You can switch between workspaces using the Team Hub application, which is explained in this "Optilogic Teams - User Guide" help center article.
Like making a backup of a model database, sharing a model can also be done through the Cloud Storage application and the Explorer. Starting with the Cloud Storage option:

The Share Model options can also be accessed through the Explorer:

Now we will cover the steps of sending a copy of a model to another user or team. The original and the copy are not connected to each other after the model has been shared in this way: updates to one are not reflected in the other, and vice versa.


After clicking on the Send Model Copy button, a message that says “Model Copy Sent Successfully” will be displayed in the Send Model Copy form. Users can go ahead and send copies of other models to other user(s)/team(s) or close out of the form by clicking on the cross icon at the right top of the form.
In this example, a copy of the CarAssembly model was sent to the Onboarding team. In the Onboarding team’s workspace this model will then appear in the Explorer:

Next, we will step through transferring ownership of a model to another user or team. The original owner will no longer have access to the model after transferring ownership. In the example here, the Onboarding team will transfer ownership of the Tariffs model to an individual user.


After clicking on the Transfer Model Ownership button, a message that says “Transferred Ownership Successfully” will be displayed in the Transfer Model Ownership form. Users can go ahead and transfer ownership of other models to other user(s)/team(s) or close out of the form by clicking on the cross icon at the right top of the form.
There will be a notification of the model ownership transfer in the workspace of the user/team that performed the transfer:

The model now becomes visible in the My Account workspace of the individual user the ownership of the model was transferred to:

Lastly, we will show the steps of sharing access to a model with a user or team. Note that sharing access to a model can be done from the Explorer and from the Cloud Storage application (same as for the Send Copy and Transfer Ownership options), but it can also be done from the Models module in Cosmic Frog:

In Cosmic Frog's Models module, hover over the model card of the model you want to share access to and then click on the 4th icon that comes up in the bottom right of the model card.
In our walk-through example, an individual user will share access to a model called "Fleet Size Optimization - EMEA Geo" with the Onboarding team.



After the plus button has been clicked to share access to the Fleet Size Optimization - EMEA Geo model with the Onboarding team, this team is listed in the People with access list:

The Onboarding team can now access this model in their workspace, and the team receives a notification of the shared access too:

Now that the Onboarding team has access to this model, they can share it with other users/teams too: they can either send a copy of it or share access, but they cannot transfer ownership as they are not the model’s owner.
In the Explorer of the workspace of the user/team who shared access to the model, a similar round icon with arrow inside it will be shown next to the model’s name. The icon colors are just inverted (blue arrow in white circle) and here the hover text is “You have shared this database”, see the screenshot below. There will also be a notification about having granted access to this model and to whom (not shown in the screenshot):

If the model owner decides to revoke access or change the permission level to a shared model, they need to open the Share Model Access form again by choosing Share Access from the Share Model options / clicking on the Share icon when hovering over the model's card on the Cosmic Frog start page:

If access to a model is revoked, the team/user whose access was removed receives a notification about this:

With Read-Only access, teammates and stakeholders can explore a shared model, view maps, dashboards, and analytics, and provide feedback — all while ensuring that the data remains unchanged and secure.
Read-Only mode is best suited for situations where protecting data integrity is a priority, for example:
See the Appendix for a complete list of actions and whether they are allowed in Read-Only Access mode or not.
Similar to revoking access to a previously shared model, changing the permission level of a shared model is done by opening the Share Model Access form again: choose Share Access from the Share Model options, or click the Share icon when hovering over the model's card on the Cosmic Frog start page:

Models with Read-Only access can be recognized on the Optilogic platform as follows:

Input tables of Read-Only Cosmic Frog models are greyed out (like output tables already are by default), and write actions (insert, delete, modify) are disabled:

Read-Only models can be recognized as follows in other Optilogic applications:
When working with models that have shared access, please keep the following in mind:
In addition to the various ways model files can be shared between users, there is a way to share a copy of all contents of a folder with another user/team too:

After clicking on the Create Share Link button, the share link is copied to the clipboard. A toast notification of this is temporarily displayed at the right top in the Optilogic platform. The user can paste the link and send it to the user(s) they want to share the contents of the folder with.
When a user who has received the share link copies it into their browser while logged into the Optilogic platform, the following form will be opened:

Folders copied using the share link option will be copied into a subfolder of the Sent To Me folder. The name of this subfolder will be the username / email of the user / team that sent the share link. The file structure of the sent folder will be maintained and appear the same as it was in the account of the sender of the share link.
See the View Share Links section in the Getting Started with the Explorer help center article on how to manage your share links.
Action - Allowed? - Notes:
By default, the Geocoding option in Cosmic Frog uses Mapbox as the geodata provider. Other supported providers are Bing, Google, PTV, and PC Miler. If you have an access key for an alternate provider and would like to configure it as the default option, follow these two steps:
1. Set up your geocoding key under your Account > Geoprovider Keys page

2. Once a new key has been created, set the key as the Default Provider to be used by clicking the Star icon next to the key. This key will now be used for Geocoding in Cosmic Frog

Cosmic Frog supports importing and exporting both CSV and Excel files directly through the application. This enables users to, for example:
In this documentation we will cover how users can import and export data into and out of Cosmic Frog, and illustrate this with multiple examples.
There are 2 methods of importing Excel/CSV data into Cosmic Frog’s input tables available to users:
Pointers on how data to be imported needs to be formatted will be covered first, including some tips and call-outs of specifics to keep in mind when using the upsert import method. Next, the steps to import a CSV/Excel file will be walked through step by step.
Data is mapped from CSV/Excel files based on column names matching the table's column names, and table names matching the file name (CSV) or worksheet name (Excel):
Data preparation tips:

CSV vs Excel: a CSV file only has 1 “worksheet”, so it can only contain data to be imported into 1 table, whereas Excel files can have multiple worksheets with data to be imported into different tables in Cosmic Frog.
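As a concrete sketch of these mapping rules, the following Python snippet (using pandas; the column names and sample values are illustrative assumptions, not an official template) writes a multi-sheet Excel file where each worksheet name matches a Cosmic Frog input table and the column headers match that table's columns:

```python
import pandas as pd

# Sheet names match Cosmic Frog input table names; column headers
# match the table column names so the importer can map the data.
# (Column names below are assumed for illustration.)
customers = pd.DataFrame({
    "customername": ["CZ_001", "CZ_002"],
    "latitude": [40.71, 34.05],
    "longitude": [-74.01, -118.24],
})
products = pd.DataFrame({
    "productname": ["FG_Racket", "FG_Ball"],
    "unitprice": [80.0, 5.0],
})

# Writing .xlsx requires an Excel engine such as openpyxl.
with pd.ExcelWriter("ModelData.xlsx") as writer:
    customers.to_excel(writer, sheet_name="Customers", index=False)
    products.to_excel(writer, sheet_name="Products", index=False)
```

Importing the resulting ModelData.xlsx would then target the Customers and Products tables based purely on those worksheet and column names.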
Please take note of how existing records are treated when using the upsert import method to import to a table which already has some data in it:
We will illustrate these behaviors through several examples too.
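As a rough illustration of the upsert semantics (this is a sketch of the behavior, not Cosmic Frog's actual implementation), consider the following pandas snippet: rows whose primary-key value matches an existing record update that record, while rows with new key values are inserted:

```python
import pandas as pd

existing = pd.DataFrame({
    "productname": ["FG_Ball", "FG_Racket"],   # primary-key column
    "unitprice": [5.0, 80.0],
})
incoming = pd.DataFrame({
    "productname": ["FG_Racket", "FG_Shoes"],  # one matching key, one new key
    "unitprice": [85.0, 60.0],
})

# Matching keys keep the incoming values; new keys are appended.
# (Simplified: here the whole row is replaced, which matches a real
# upsert when the incoming file contains all columns.)
merged = (
    pd.concat([existing, incoming])
    .drop_duplicates(subset="productname", keep="last")
    .reset_index(drop=True)
)
print(merged)
# FG_Ball keeps 5.0, FG_Racket is updated to 85.0, FG_Shoes is added.
```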
Users can import 1 or multiple CSV or Excel files simultaneously. Please take note of how the import will work in the following situations:
Once ready to import the prepared CSV/Excel file(s), the user has 2 ways of accessing the import and export methods: from the File menu in the toolbar and from the right-click context menu of an input table. From the File menu, importing a file looks like this:

And when using the right-click context menu the steps to import a file are as follows:

When using the replace import method, a confirmation message will be shown, on which the user can click Import to continue the import or Cancel to abort.
Next, a file explorer window opens in which the user can browse to and select the CSV/Excel file(s) to import:

Once the import starts, a status message shows at the top of the active table:

The Model Activity log will also have an entry for each import action:

The user can see the results of the import by opening and inspecting the affected input table(s), and by looking at the row counts for the tables in the input tables list, outlined in green in this screenshot:

A common way to start building a new model in Cosmic Frog is to make use of the replace import method to populate multiple tables simultaneously with data from Excel or CSV files. These files have typically been prepared from ERP extracts which have been manipulated to match the Cosmic Frog table and column names. This way, users do not need to enter data manually into the Cosmic Frog input tables, which would be very laborious. Note that it can be helpful to first export empty tables from a new, empty Cosmic Frog model to have a template to start filling out (see the “Exporting to CSV/Excel Files” section further below on how to do this).
Starting with an empty new model in Cosmic Frog:

The user has prepared the following Excel .xlsx file:

After importing this file into Cosmic Frog, we notice that the Customers, Facilities and Products tables now have row counts that match the number of records we had in the Excel file that was used for the import, and we can open the individual tables to see the imported records:

Consider a user who is modelling a sports equipment company and has populated the Products table of a Cosmic Frog model with 8 products as follows:

After working with the model for a while, the user realizes a few things:
As item number 1 changes the product names, a column that is part of the primary key of the Products table, the user will need to use the replace import method to make these changes; the upsert method does not change the values of columns that are part of the primary key. The following is the .xlsx file the user prepares to replace the data in the Products table with:

After importing the file using the replace method, the Products table looks like this:

We see the records are exactly the same as what was contained in the Products.xlsx file that was imported, and the row count for the Products table has correctly gone up to 10 with the 2 new products added.
Continuing from the Products table in the last screenshot above, user now wants to make a few additional changes as follows:
To make these changes to the Products table, the user prepares the following Products file to be upserted to the Products table, where the green numbers in the screenshot below match the items described in the bullet point list directly above:

After using the upsert import method to import this file into the Products table, it contains the following records; the ones changed/added are listed at the bottom:

In the boxes outlined in green we see that all the expected changes and the insertion of the 1 new record have been made.
Let us also illustrate what will happen when files with invalid/missing data are imported. We will use the replace import method for the example here, but similar results will be seen when using the upsert method. The following screenshot shows a Products table that has been prepared in Excel, where we can already see several issues: a blank Product Name, a negative value for Unit Price, etc.

After this file is imported to the Products table using the replace method, the Products table will look as follows:

The cells that are outlined in red contain invalid values. Hovering over each cell will show a tooltip message describing the problem.
For tables with many records, it may be hard to find the fields outlined in red manually. To help with this, there is a standard filter the user can apply that will show all records that have 1 or multiple input data errors:

In conclusion, Cosmic Frog will let a user import invalid data, and then helps the user identify the data issues with the red outlines, hover-over tooltips, and the Show Input Data Errors filter.
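Users who prepare import files programmatically can also screen for such issues before importing. Below is a minimal sketch with pandas, assuming hypothetical column names productname and unitprice and checking for the kinds of issues shown above:

```python
import pandas as pd

# Read the sheet that will be imported into the Products table.
df = pd.read_excel("Products.xlsx", sheet_name="Products")

# Flag blank product names and negative unit prices before importing.
issues = df[
    df["productname"].isna()
    | (df["productname"].astype(str).str.strip() == "")
    | (df["unitprice"] < 0)
]
if not issues.empty:
    print("Rows with potential input data errors:")
    print(issues)
```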
Consider the following Transportation Policies table:

There is now a change: from MFG_1, all racket products need to be shipped by Parcel for a fixed cost of $50. The user creates 2 Named Filters in the Products table (see the Named Filters in Cosmic Frog help center article): one named Rackets, which selects all racket products (those with a product name starting with FG_Racket), and one named AllExceptRackets, which selects all non-racket products (those whose product name does not contain racket). Next, the user prepares the following TransportationPolicies.csv file to upsert into the Transportation Policies table, with the intention of updating the first 2 records in the existing table to be specific to the AllExceptRackets products and adding 2 new records for the Rackets products:

The result of using this file to upsert to the Transportation Policies table is as follows:

This example shows that users need to be mindful of which fields are part of the table’s primary key and remember that values of primary key fields cannot be changed by the upsert import method. An example workflow that will achieve the desired changes to the Transportation Policies table is as follows:
It is possible to export a single table or multiple tables (input and output tables) to CSV or Excel from Cosmic Frog. Similar to importing data from CSV/Excel, the user can access the export options in 2 ways: from the File menu in the toolbar and from the context menus that come up when right-clicking on tables in the input/output/custom tables lists.
Please note:
The steps to export multiple tables to an Excel file are as follows:

Once the export starts, the following message appears at the top of the active table:

Once the export is complete, the exported file can be found in the folder where the user’s downloaded files are saved:

When exporting multiple tables to Excel or CSV, the downloaded file will be a .zip file with an automatically generated name based on the model’s Cosmic Frog ID. Extracting the zip-file will show an .xlsx file of the same name, which can be opened in Excel:

These are the steps to export multiple tables to CSV:

When the export starts, the same “File is exporting…” message as shown in the previous section will show at the top of the active table. Once the export process is finished, the exported file can again be found in the folder where the user’s downloaded files are saved:

The file is again a zip-file, and it has the same name based on the model’s Cosmic Frog ID, just appended with (1), as there is already a zip-file of the same name in the Downloads folder from the previous export to Excel. Unzipping the file creates a new sub-folder of the same name in the Downloads folder:

Exporting a single table to Excel can also be done from the File menu, in the same way as multiple tables are exported to Excel, which was shown above in the “Export Multiple Tables to Excel” section. Now, we will show the second way of doing this by using the context menu that comes up when right-clicking on a table:

When the export starts, the same “File is exporting…” message as shown above will show at the top of the active table. Once the export process is finished, the exported file can again be found in the folder where the user’s downloaded files are saved:

The name of the exported file matches that of the table that was exported.
Exporting a single table to CSV can also be done from the File menu, in the same way as multiple tables are exported to CSV, which was shown above in the “Export Multiple Tables to CSV” section. Now, we will show the second way of doing this by using the context menu that comes up when right-clicking on a table:

When the export starts, the same “File is exporting…” message as shown above will show at the top of the active table. Once the export process is finished, the exported file can again be found in the folder where the user’s downloaded files are saved:

For single tables exported to CSV, the name of the file is the same as the name of the exported table. If the Cosmic Frog table was filtered, the file name is appended with “_filtered”, as it is here, to remind the user that only the filtered rows are contained in the exported file.
When downloading all of the related files for any of the sample Excel Apps that are posted to our Resource Library, you will likely encounter macro-enabled Excel workbooks (.xlsm) which might have their macros disabled following download.

To enable the macros, you can right-click on the Excel file in a File Explorer and select the option for Properties. In the Properties menu, check the box in the Security section to Unblock the file and then hit Apply.

You can then re-open the document and should see the macros are now enabled.
Some of the Excel templates also make use of add-ins to help expand the workbook’s capabilities. An example of this is the ArcGIS add-in, which allows for map visuals to be created directly in the workbook. It is possible that these add-ins might be disabled by default in the Microsoft Office settings; if that is the case, you should see an error message as follows:

This can be resolved by updating your Privacy Settings under your Microsoft Office Account page. To access this menu, click into File > Account > Account Privacy > Manage Settings. You will then want to make sure that in the Optional Connected Experiences, the option to Turn on optional connected experiences is enabled.

Updating this value will require a restart of Microsoft Office. Once the restart is completed, you should see that the add-ins are now enabled and working as expected.
If you run into other issues with any Resource Library templates, please do not hesitate to reach out to support@optilogic.com.
Teams is an exciting new feature set designed to enhance collaboration within Supply Chain Design, enabling companies to foster a more connected and efficient working environment. With Teams, users can join a shared workspace where all team members have seamless access to collective models and files. This ensures that every piece of work remains synchronized, providing a single source of truth for your data. When one team member updates a file, those changes instantly reflect for all other members, eliminating inconsistencies and ensuring that everyone stays aligned.
Beyond simply improving collaboration, Teams offers a structured and flexible way to organize your projects. Instead of keeping all your files and models confined to a personal account, you can now create distinct teams tailored to different projects, departments, or business functions. This means greater clarity and easier navigation between workspaces, ensuring that the right content is always at your fingertips.
Consider the possibilities:
Teams introduces a more intuitive and structured way to collaborate, organize, and access your work—ensuring that your team members always have the latest updates and a streamlined experience. Get started today and transform the way you work together!
This documentation contains a high-level overview of the Teams feature set, details the steps to get started, gives examples of how Teams can be structured, and covers best practices. More detailed documentation for Organization Administrators and Teams Users is available in the following help center articles:
The diagram below highlights the main building blocks of the Teams feature set:

At a high-level, these are the steps to start using the Teams feature set:
Here follow 5 examples of how teams can be structured, each with an explanation of why such a setup works well.
Please keep the following best practices in mind to ensure optimal use of the Teams feature set:
Once you have set up your teams and added content, you are ready to start collaborating and unlocking the full potential of Teams within Optilogic!
Let us know if you need help along the way—our support team (support@optilogic.com) has your back.
The File Explorer is a fully functioning view into your workspace file system. To open the File Explorer, look for the following icon on the left-hand side of the Atlas environment:

To expand a folder to view its contents simply click anywhere on the folder, its label, or the expand icon (the small triangle next to the folder itself).

To collapse the folder you can either click on the same area again, or you can collapse all folders by clicking the button outlined below.

To open a file, navigate to it in the file explorer and double-click on it. Once done, the file will load into the editor. Please note that some files will not show properly in the editor, such as files that contain binary content or encrypted content.
There are two ways you can create a new file: through the file menu:

or via the file explorer context menu.

Keep in mind that the context you have selected in the file explorer is where the new folder will go, so if you want the folder underneath a specific other folder, make sure to select that folder before you begin. That being said, if you forget, you can still move the folder afterward.
To duplicate a file or folder, select the Duplicate command from the right-click context menu. This will create a copy of the file or folder in the same location with the suffix _copy.

To rename a file or folder, select the Rename command from the right-click context menu or use the hotkey F2 while the file or folder is selected. This will present a dialog where you can type in the new name that you want to use. Once specified, hit Enter or click the OK button to apply the new name.
To upload a file or files, make sure that you have a folder selected. From the right-click context menu select the Upload Files… command or select the same command from the file menu. Select the file or files from the file selection dialog and hit Enter or click the Open button. This will upload the selected files to the folder that you have selected.
To download a file or folder, select the Download command from the right-click context menu or the same command from the file menu. This will download the selected file or folder to your local machine. If you have selected a folder, the contents will be compressed into a .tar archive file. You will need to use a file archiving tool to extract the contents of the archive.
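If you prefer to extract the archive programmatically, Python's standard tarfile module can do it in a few lines (the file name here is just a placeholder):

```python
import tarfile

# Extract a downloaded folder archive into a local directory.
with tarfile.open("MyFolder.tar") as archive:
    archive.extractall(path="MyFolder")
```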
To save a file, select the Save option from the file menu or use the hotkey CTRL+S. This will save the changes in the file that has focus in the editor. Note: the file with focus is the one whose editor tab has the same background as the editor itself. You can tell if a file needs to be saved by the presence of a white dot in the file's tab in the editor.
You can delete a file or folder by selecting the Delete command from the right-click context menu or hitting the Delete key on your keyboard.
By default, the auto-save option is turned on. This means that your files will automatically save as you are editing them. You can turn this on or off by selecting the Auto Save option in the file menu.

Atlas includes a file comparison tool. This will show you line-by-line differences between two files.

To use it you can either select the Compare with Each Other command from the right-click context menu while two files are selected, or you can select the Select for Compare command on the first file, then select the Compare with Selected command on the other file you would like to compare.
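The comparison itself happens inside Atlas; purely as an illustration of what a line-by-line comparison produces, here is the same idea using Python's standard difflib module:

```python
import difflib

old = ["demand = 100", "cost = 5"]
new = ["demand = 120", "cost = 5"]

# Print a unified diff showing which lines changed between the two files.
for line in difflib.unified_diff(old, new, fromfile="a.py", tofile="b.py", lineterm=""):
    print(line)
```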

When there are changes to your workspace file system but they haven’t shown up visually, you can refresh the explorer to show you the latest state on disk. This can happen sometimes if the models you are building are writing out files. To refresh the explorer, click the refresh button in the upper right.

When a Cosmic Frog model has been built and scenarios of interest have been created, it is usually time to run 1 or multiple scenarios. This documentation covers how scenarios can be kicked off, and which run parameters can be configured by users.
When ready to run scenarios, the user can click on the green Run button at the right top of the Cosmic Frog screen:

This will bring up the Run Settings screen:

Please note that:

While we will discuss all parameters for each technology in the next sections of the documentation, users can find explanations for each of them within Cosmic Frog too: when hovering over a parameter, a tooltip explaining the parameter will be shown. In the next screenshot, user has expanded the Termination section within the Optimization (Neo) parameters section and is hovering with the mouse over the “MIP Relative Gap Percentage” parameter, which brings up the tooltip explaining this parameter:
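As background for this particular parameter: the exact formula can vary by solver, but a MIP relative gap is commonly computed from the best integer solution found so far (the incumbent) and the best proven bound, for example:

$$\text{gap}\,\% = 100 \times \frac{|\text{best bound} - \text{incumbent}|}{|\text{incumbent}|}$$

So, for an incumbent total cost of $10,200,000 and a best bound of $10,098,000, the gap is 100 × 102,000 / 10,200,000 = 1%, and a MIP Relative Gap Percentage setting of 1 would allow the solver to stop at that point.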

When selecting one or multiple scenarios to be run with the same engine, the corresponding technology parameters are automatically expanded in the Technology Parameters part of the Run Settings screen:

When enabled, the Check Infeasibility parameter will run an infeasibility diagnostic on the model in order to identify any cause(s) of the scenario being infeasible rather than optimizing the scenario. For more information on running infeasibility diagnostics, please see this Help article.







This can help users troubleshoot issues. For example, if a simulation run takes longer than expected, users can review the Job Log to see if it gets stuck on any particular day during the modelling horizon.

The inventory engine in Cosmic Frog (called Dendro) uses a genetic algorithm to evaluate possible solutions in a successive manner. The parameters that can be set dictate how deep the search space is and how solutions can evolve from one generation to the next. The parameter defaults will suffice for most inventory problems; however, experienced users can change them as needed.
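For intuition only, the sketch below shows a generic genetic-algorithm loop in Python. It is not Dendro's implementation; it merely illustrates how parameters such as population size, number of generations, and mutation rate shape how deep the search goes and how solutions evolve between generations:

```python
import random

def fitness(candidate):
    # Placeholder objective: in an inventory context this could be
    # negated total cost for candidate stock levels (illustrative only).
    return -sum((x - 42) ** 2 for x in candidate)

def evolve(pop_size=20, generations=50, mutation_rate=0.1, genes=5):
    population = [[random.uniform(0, 100) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            # Crossover: combine genes from both parents.
            child = [random.choice(pair) for pair in zip(p1, p2)]
            # Mutation: occasionally perturb one gene.
            if random.random() < mutation_rate:
                child[random.randrange(genes)] += random.gauss(0, 5)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best)  # genes should approach 42 as generations progress
```

Larger populations and more generations explore the search space more thoroughly at the cost of longer runtimes, which is the same kind of trade-off the Dendro parameters control.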


Please feel free to contact Optilogic Support at support@optilogic.com in case of questions or feedback.