DataStar Quick Start: Export Data to a Cosmic Frog Model

In this quick start guide we will walk through the steps of exporting data from a table in the Project Sandbox to a table in a Cosmic Frog model.

Please note that DataStar is currently in the Early Adopter phase, where only users who participate in the Early Adopter program have access to it. DataStar is rapidly evolving while we work towards the General Availability release later this year. For any questions about DataStar or the Early Adopter program, please feel free to reach out to Optilogic’s support team at support@optilogic.com.

Initial Setup and Quick Start Steps

This quick start guide builds upon a previous one where unique customers were created from historical shipments using a Leapfrog-generated Run SQL task. Please follow the steps in that quick start guide first if you want to follow along with the steps in this one. The starting point for this quick start is therefore a project named Import Historical Shipments, which contains a macro called Import Shipments. This macro has an Import task and a Run SQL task. The project has a Historical Shipments data connection of type = CSV, and the Project Sandbox contains 2 tables named rawshipments (42,656 records) and customers (1,333 records).
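
If you want to confirm this starting point yourself, a quick record count can be run against the Project Sandbox (for example via a Run SQL task). This is an optional, minimal sketch; the table names come from the previous quick start guide:

  -- Check the starting record counts in the Project Sandbox
  SELECT 'rawshipments' AS table_name, COUNT(*) AS record_count FROM rawshipments
  UNION ALL
  SELECT 'customers', COUNT(*) FROM customers;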

The steps we will walk through in this quick start guide are:

  1. Create an empty Cosmic Frog model.
  2. Set up a data connection to the empty Cosmic Frog model.
  3. Add and configure an Export task.
  4. Check the results of the complete macro.

Create an Empty Cosmic Frog Model

First, we will create a new Cosmic Frog model which does not have any data in it. We want to use this model to receive the data we export from the Project Sandbox.

As shown with the numbered steps in the screenshot below: while on the start page of Cosmic Frog, click on the Create Model button at the top of the screen. In the Create Frog Model form that comes up, type the model name, optionally add a description, and select the Empty Model option. Click on the Create Model button to complete the creation of the new model:

Create Cosmic Frog Model Data Connection

Next, we want to create a connection in DataStar to the empty Cosmic Frog model we just created. To do so: open your DataStar application, then click on the Create Data Connection button at the top of the screen. In the Create Data Connection form that comes up, type the name of the connection (we are using the same name as the model, i.e. “Empty CF Model for DataStar Export”), optionally add a description, select Cosmic Frog Models in the Connection Type drop-down list, click on the name of the newly created empty model in the list of models, and click on Add Connection. The new data connection will now be shown in the list of connections on the Data Connections tab (shown in list format here):

Now, go to the Projects tab, and click on the “Import Historical Shipments” project to open it. We will first have a look at the Project Sandbox and the empty Cosmic Frog model connections, so click on the Data Connections tab:

  1. We have opened the Data Connections tab.
  2. The Project Sandbox connection has been expanded.
  3. We see that the customers table in the Project Sandbox contains 1.33k records. This was the result of creating unique customers from historical shipment data in the previous quick start guide.
  4. The empty Cosmic Frog model connection has been expanded too.
  5. Scrolling down the list of tables in the model, we see that the customers table has 0 records.

Add Export Task

The next step is to add an Export task to the Import Shipments macro and configure it. Click on the Macros tab in the panel on the left-hand side, and then on the Import Shipments macro to open it. Click on the Export task in the Tasks panel on the right-hand side and drag it onto the Macro Canvas. If you drag it close to the Run SQL task, it will automatically connect to it once you drop the Export task:

The Configuration panel on the right has now become the active panel:

  1. We give the task the name of “Export Customers”.
  2. In the Data Connection section, we will configure the source and destination of the export, starting with the source:
    1. Select the Project Sandbox as the Data Connection to export from.
    2. From the table drop-down, select the customers table.
  3. It is possible to either export to a Data Connection or export as a .CSV file. We choose the first option.
  4. Now we need to configure the destination of the export:
    1. In the drop-down list of Data Connections, select the “Empty CF Model for DataStar Export” connection to export into.
    2. It is possible for the exported data to be added to either a new or an existing table. Select the Existing Table option from the drop-down list.
    3. Now we can click on the Tables drop-down list and scroll to the customers table; select it so it will receive the exported data.
    4. In the future, users will be able to choose between Replace, which overwrites any data already present in the table, and Upsert, which appends new data and updates any already existing data (the sketch after this list illustrates the difference between these two behaviors). For now, only the Replace option is available.
  5. In the Data Mapping section, we need to indicate which source column’s data will be exported into which destination column.
    1. On the left-hand side, the source column names and their data types are listed.
    2. On the right-hand side, we need to indicate which destination column the source column should export into. We could go through and select the desired column from the drop-down list of all columns in the destination data connection, but we will first give the AutoMap option a try, see the next bullet and screenshot. More details on manual mapping can be found in the Data Mapping Details section further below.
    3. The AutoMap button can be used to automatically map source columns to destination columns. This is done based on matching column names.
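
As referenced in step 4 above, DataStar currently applies Replace, with Upsert to follow later. The sketch below is purely illustrative of the difference between the two behaviors in SQL terms; it assumes a PostgreSQL-style destination, a customers table with a unique customername column, and a hypothetical staged_export table holding the exported rows. None of these details come from DataStar itself:

  -- Replace: existing rows are removed, then the exported rows are loaded.
  DELETE FROM customers;
  INSERT INTO customers (customername, latitude, longitude)
  SELECT customername, latitude, longitude FROM staged_export;

  -- Upsert: new rows are appended; rows whose key already exists are updated.
  -- (Requires a unique constraint on customername.)
  INSERT INTO customers (customername, latitude, longitude)
  SELECT customername, latitude, longitude FROM staged_export
  ON CONFLICT (customername) DO UPDATE
    SET latitude  = EXCLUDED.latitude,
        longitude = EXCLUDED.longitude;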

Click on the AutoMap button, and in the message that comes up, select either Replace Mappings or Add New Mappings. Since we have not mapped anything yet, the result will be the same in this case. After using the AutoMap option, the mapping looks as follows:

We see that each source column is now mapped to a destination column of the same name. This is what we expect: in the previous quick start guide, when Leapfrog generated the Run SQL task for creating unique customers, we made sure to tell it to match the schema of the customers table in Cosmic Frog models (“the Anura schema”).
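
For reference, that Run SQL task produced customers with Anura-style column names, which is why AutoMap could match everything. A rough sketch of that kind of statement is shown below; it is not the exact SQL from the previous guide, and the rawshipments column names are assumptions:

  -- Build unique customers from the raw shipments, using column names that
  -- match the Anura customers table so AutoMap can pair them up later.
  CREATE TABLE customers AS
  SELECT DISTINCT
    customer_name      AS customername,
    customer_latitude  AS latitude,
    customer_longitude AS longitude
  FROM rawshipments;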

Check the Results

If the Import Shipments macro has been run previously, we can just run the new Export Customers task by itself (hover over the task in the Macro Canvas and click on the play button that comes up); otherwise, we can run the full macro by clicking on the green Run button at the top right. Once completed, click on the Data Connections tab to check the results:

  1. Expand the Empty CF Model for DataStar Export connection and scroll down to the customers table: it now contains 1.33k records (a SQL-based check is also sketched after this list). Click on it to open it in the central part of DataStar.
  2. The customers table is now open.
  3. We can verify that each customer is present and that its latitude and longitude columns are populated (they were moved left for the screenshot; you may need to scroll right to see them).
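
If you prefer to double-check with SQL rather than by scrolling through the table, a query along these lines can be run against the Cosmic Frog model. This is a minimal sketch and assumes the standard Anura column names:

  -- Confirm the export landed in the Cosmic Frog model's customers table.
  SELECT COUNT(*) AS customer_count FROM customers;

  -- Spot-check that latitude and longitude are populated.
  SELECT customername, latitude, longitude
  FROM customers
  WHERE latitude IS NULL OR longitude IS NULL;
  -- Expect 0 rows if every customer has coordinates.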

Data Mapping Details

Above, the AutoMap functionality was used to map all 3 source columns to the correct destination columns. Here, we will go into some more detail on manual mapping and the additional options users have to quickly sort and filter the list of mappings (a small SQL sketch that can help when mapping manually follows this list).

  1. To map a Source Column, click on the corresponding field in the Destination Column column, and a drop-down with the list of columns present in the destination data connection will come up.
  2. To quickly find a column in the list, type text into the Search box. Only columns whose names contain the typed text will be shown.
  3. The destination columns can also be filtered by type. Click on the filter icon and check the checkbox(es) of the data types that should be shown. The available data types to select from are: Boolean, Date Time, Number, and Text. Note that each data type can be recognized by its first-letter abbreviation and distinct font color. These are used in the Type columns and also next to the column names in the drop-down list. For the text data type it is a green T, for number a blue N, etc.
  4. If the currently selected field has already been mapped, then it will be highlighted in the drop-down list of column names in the destination data connection. Here, customername is highlighted as this is the current mapping of this field.
  5. If the currently selected field has already been mapped, an option to remove the mapping is present at the bottom of the drop-down list. Select this “Remove Selection” option to undo the current mapping.
  6. If columns in the destination data connection have already been mapped to a different source column, they will be greyed out in the drop-down list and cannot be selected. Hovering over one of these brings up hover text telling the user which source column that destination column is mapped to.
  7. Click on the eye icon to only show the source columns that have been mapped; unmapped columns will be hidden. This can be convenient when mapping a select subset of many columns: when you think you have mapped all the needed columns, use this to show only the mapped columns and double-check they are all there. Clicking on the icon again will show all source columns.
  8. The list of mappings can also be filtered based on the data type of the source columns. Click on this icon to bring up the checkboxes to check/uncheck the data types that should be shown. The available data types to select from are: Boolean, Date Time, Number, and Text.
  9. The list of mappings can be sorted. Click on this icon and choose one of the following three sort options:
    1. Source Table Order: orders the mappings based on the sequence of the source columns in the source table.
    2. Name (A-Z): orders the mappings in alphabetical order of the source column names.
    3. Type: orders the mappings in alphabetical order of the source column types, e.g. all Boolean columns first, then all Date Time columns, etc.
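
Since AutoMap and manual mapping both work off the destination column names, it can help to list those names and types up front. The query below is a standard PostgreSQL information_schema lookup that could be run against the destination connection; using it here is just a suggestion, not a DataStar feature:

  -- List the destination table's columns and data types, in table order.
  SELECT column_name, data_type
  FROM information_schema.columns
  WHERE table_name = 'customers'
  ORDER BY ordinal_position;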
