Unverified commit 017f11f9, authored by Srini Kadamati, committed by GitHub

docs: Updates to Superset Site for 1.0 (#12626)

* incorporating precommit logic

* add 1.0 page

* fixed annoying docz config issue 2

* tweaked indentation

* added asf link 2

* changed Dockerhub link

* reverted frontend package lock json: precommit
Parent da63b4b0
---
name: Adding New Drivers in Docker
menu: Connecting to Databases
route: /docs/databases/dockeradddrivers
index: 1
version: 1
---
## Adding New Database Drivers in Docker
Superset requires a Python database driver to be installed for each additional type of database you
want to connect to. When setting up Superset locally via `docker-compose`, the drivers and packages
......
---
name: Installing Database Drivers
menu: Connecting to Databases
route: /docs/databases/installing-database-drivers
index: 0
version: 1
---
## Install Database Drivers
Superset requires a Python DB-API database driver and a SQLAlchemy dialect to be installed for each
datastore you want to connect to.
Superset interacts with the underlying databases using the provided SQL interface (often through a
SQLAlchemy library).
You can read more [here](/docs/databases/dockeradddrivers) about how to install new database drivers into your Superset configuration.
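As a quick sanity check before touching any Superset configuration, you can ask SQLAlchemy which dialect it resolves for a given URI. This is a sketch: it uses the built-in SQLite dialect, which needs no extra driver because it ships with the Python standard library.

```shell
# Sanity check (sketch): ask SQLAlchemy which dialect it resolves for a URI.
# SQLite works with no extra driver, since it ships with Python itself.
python3 -c "from sqlalchemy import create_engine; print(create_engine('sqlite://').dialect.name)"
```

For any other database, the same one-liner typically fails until the matching driver and dialect packages have been pip-installed.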
### Supported Databases and Dependencies
Superset does not ship bundled with connectivity to databases, except for SQLite, which is part of the Python standard library. You’ll need to install the required packages for the database you want to use as your metadata database as well as the packages needed to connect to the databases you want to access through Superset.
Here is a list of some of the recommended packages:
......
|[Apache Pinot](/docs/databases/pinot)|```pip install pinotdb```|```pinot+http://CONTROLLER:5436/query?server=http://CONTROLLER:5983/```|
|[Apache Solr](/docs/databases/solr)|```pip install sqlalchemy-solr```|```solr://{username}:{password}@{hostname}:{port}/{server_path}/{collection}```|
|[Apache Spark SQL](/docs/databases/spark)|```pip install pyhive```|```hive://hive@{hostname}:{port}/{database}```|
|[Azure MS SQL](/docs/databases/sql-server)|```pip install pymssql```|```mssql+pymssql://UserName@presetSQL:TestPassword@presetSQL.database.windows.net:1433/TestSchema```|
|[Big Query](/docs/databases/bigquery)|```pip install pybigquery```|```bigquery://{project_id}```|
|[ClickHouse](/docs/databases/clickhouse)|```pip install sqlalchemy-clickhouse```|```clickhouse://{username}:{password}@{hostname}:{port}/{database}```|
|[CockroachDB](/docs/databases/cockroachdb)|```pip install cockroachdb```|```cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable```|
......
|[SAP Hana](/docs/databases/hana)|```pip install hdbcli sqlalchemy-hana``` or ```pip install apache-superset[hana]```|```hana://{username}:{password}@{host}:{port}```|
|[Snowflake](/docs/databases/snowflake)|```pip install snowflake-sqlalchemy```|```snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}```|
|SQLite||```sqlite://```|
|[SQL Server](/docs/databases/sql-server)|```pip install pymssql```|```mssql://```|
|[Teradata](/docs/databases/teradata)|```pip install sqlalchemy-teradata```|```teradata://{user}:{password}@{host}```|
|[Vertica](/docs/databases/vertica)|```pip install sqlalchemy-vertica-python```|```vertica+vertica_python://<UserName>:<DBPassword>@<Database Host>/<Database Name>```|
***
Note that many other databases are supported, the main criteria being the existence of a functional
SQLAlchemy dialect and Python driver. Searching for the keyword "sqlalchemy + (database name)"
should help get you to the right place.
If your database or data engine isn't on the list but a SQL interface
exists, please file an issue on the
[Superset GitHub repo](https://github.com/apache/superset/issues), so we can work on documenting and
supporting it.
[StackOverflow](https://stackoverflow.com/questions/tagged/apache-superset+superset) and the
[Superset community Slack](https://join.slack.com/t/apache-superset/shared_invite/zt-l5f5e0av-fyYu8tlfdqbMdz_sPLwUqQ)
are great places to get help with connecting to databases in Superset.
In the end, you should be looking for a Python package compatible with your database. One part that
makes database driver installation tricky is the fact that local binaries are sometimes required in
order for them to bind properly, which means that various apt packages might need to be installed
before pip can get things set up.
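For the `docker-compose` setup mentioned above, a common pattern (sketched here, assuming the optional `docker/requirements-local.txt` convention from the Superset repository) is to list extra driver packages in a local requirements file and then rebuild the image:

```shell
# Sketch: add extra Python drivers to a local docker-compose Superset setup.
# Assumes the repo's convention of an optional docker/requirements-local.txt
# that gets pip-installed into the image at build time.
mkdir -p ./docker
echo "pymssql" >> ./docker/requirements-local.txt     # SQL Server driver
echo "pybigquery" >> ./docker/requirements-local.txt  # BigQuery driver

# Then rebuild and restart so the new drivers are baked into the image:
#   docker-compose down && docker-compose up --build
```

One package per line; pinning versions (e.g. `pymssql==2.1.5`) works the same way it does in any pip requirements file.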
......
following information about each flight is given:
- Information about the origin and destination.
- The distance between the origin and destination, in kilometers (km).
### Enabling Data Upload Functionality
You may need to enable the functionality to upload a CSV or Excel file to your database. The following section
explains how to enable this functionality for the examples database.
In the top menu, select **Data ‣ Databases**. Find the **examples** database in the list and
select the **Edit** button.
<img src="/images/edit-record.png" />
In the resulting modal window, switch to the **Extra** tab and
tick the checkbox for **Allow Data Upload**. End by clicking the **Save** button.
<img src="/images/add-data-upload.png" />
### Loading CSV Data
Download the CSV dataset to your computer from
[GitHub](https://raw.githubusercontent.com/apache-superset/examples-data/master/tutorial_flights.csv).
In the Superset menu, select **Data ‣ Upload a CSV**.
<img src="/images/upload_a_csv.png" />
......
Leaving all the other options in their default settings, select **Save** at the bottom of the page.
### Table Visualization
In this section, we'll create a table visualization
to show the number of flights and cost per travel class.

You should now see _tutorial_flights_ as a dataset in the **Datasets** tab. Click on the entry to
launch an Explore workflow using this dataset.

Next, select the visualization type as **Table**.

<img src="/images/select_table_visualization_type.png" />

Then, select **Create new chart** to go into the chart view.
By default, Apache Superset only shows the last week of data. In our example, we want to visualize all
of the data in the dataset. Click the **Time ‣ Time Range** section and change
the **Range Type** to **No Filter**.
<img src="/images/no_filter_on_time_filter.png" />
Click **Apply** to save.
Now, we want to specify the rows in our table by using the **Group by** option. Since, in this
example, we want to understand the different Travel Classes, we select **Travel Class** in this menu.
Next, we can specify the metrics we would like to see in our table with the **Metrics** option.

- `COUNT(*)`, which represents the number of rows in the table
  (in this case, the number of flights in each Travel Class)
- `SUM(Cost)`, which represents the total cost spent by each Travel Class
<img src="/images/sum_cost_column.png" />
......
Finally, select **Run Query** to see the results of the table.
<img src="/images/tutorial_table.png" />
Congratulations, you have created your first visualization in Apache Superset!
To save the visualization, click on **Save** in the top left of the screen.

- Select the **Save as**
  option and enter the chart name as Tutorial Table (you will be able to find it again through the
  **Charts** screen, accessible in the top menu).
- Select **Add To Dashboard** and enter
  Tutorial Dashboard. Finally, select **Save & Go To Dashboard**.
<img src="/images/save_tutorial_table.png" />
......

In this section, we will extend our analysis using a more complex visualization, the Pivot Table. By the
end of this section, you will have created a table that shows the monthly spend on flights for the
first six months, by department, by travel class.
Create a new chart by selecting **+ ‣ Chart** from the top right corner. Choose
_tutorial_flights_ again as a datasource, then click on the visualization type to get to the
visualization menu. Select the **Pivot Table** visualization (you can filter by entering text in the
search box) and then **Create New Chart**.
<img src="/images/create_pivot.png" />
In the **Time** section, keep the Time Column as Travel Date (this is selected automatically as we
only have one time column in our dataset). Then select Time Grain to be month as having daily data
......
Finally, select **Run Query** to see some data!
<img src="/images/tutorial_pivot_table.png" />
You should see months in the rows and Department and Travel Class in the columns. Publish this chart
to the Tutorial Dashboard you created earlier.
### Line Chart
In this section, we are going to create a line chart to understand the average price of a ticket by
month across the entire dataset.
In the **Time** section, as before, keep the Time Column as Travel Date and the Time Grain as month,
but this time, for the Time Range, select **No filter**, as we want to look at the entire dataset.
Within Metrics, remove the default `COUNT(*)` metric and instead add `AVG(Cost)` to show the mean value.
<img src="/images/average_aggregate_for_cost.png" />
......
<img src="/images/tutorial_line_chart.png" />
Once you’re done, publish the chart in your Tutorial Dashboard.
### Markup
......
In this section, you will learn how to add a filter to your dashboard. Specifically, we will create
a filter that allows us to look at those flights that depart from a particular country.
A filter box visualization can be created like any other visualization by selecting **+ ‣ Chart**,
and then _tutorial_flights_ as the datasource and Filter Box as the visualization type.
First of all, in the **Time** section, remove the filter from the Time range selection by selecting
No filter.
......
......
## Creating Your First Dashboard
This section is focused on documentation for end-users who will be using Superset
for the data analysis and exploration workflow
(data analysts, business analysts, data
scientists, etc). In addition to this site, [Preset.io](http://preset.io/) maintains an updated set
of end-user documentation at [docs.preset.io](https://docs.preset.io/).
This tutorial targets someone who wants to create charts and dashboards in Superset. We’ll show you
......
### Connecting to a new database
Superset itself doesn't have a storage layer to store your data but instead pairs with
your existing SQL-speaking database or data store.

First things first, we need to add the connection credentials to your database to be able
to query and visualize data from it. If you're using Superset locally via
[Docker compose](/docs/installation/installing-superset-using-docker-compose), you can
skip this step because a Postgres database, named **examples**, is included and
pre-configured in Superset for you.

Under the **Data** menu, select the _Databases_ option:

<img src="/images/tutorial_01_sources_database.png" />{' '} <br/><br/>

Next, click the green **+ Database** button in the top right corner:

<img src="/images/tutorial_02_add_database.png" />{' '} <br/><br/>

You can configure a number of advanced options in this window, but for this walkthrough you only
need to specify two things (the database name and SQLAlchemy URI):
<img src="/images/tutorial_03_database_name.png" />
Provide the SQLAlchemy Connection URI and test the connection:
<img src="/images/tutorial_04_sqlalchemy_connection_string.png" />
As noted in the text below
the URI, you should refer to the SQLAlchemy documentation on
[creating new connection URIs](https://docs.sqlalchemy.org/en/12/core/engines.html#database-urls)
for your target database.
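Most SQLAlchemy connection URIs follow the same general shape; the example below is purely hypothetical (made-up host, credentials, and database), shown only to illustrate the anatomy:

```shell
# General anatomy of a SQLAlchemy URI:
#   dialect+driver://username:password@host:port/database
# A hypothetical PostgreSQL example (all values made up):
echo "postgresql+psycopg2://superset_user:secret@db.example.com:5432/sales"
```

Swap in the dialect, driver, and connection details for your own database from the table in the previous section.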
Click the **Test Connection** button to confirm things work end to end. If the connection looks
good, save the configuration by clicking the **Add** button in the bottom right corner of the
modal window:

<img src="/images/tutorial_04_add_button.png" />

Congratulations, you've just added a new data source in Superset!

### Registering a new table
Now that you’ve configured a data source, you can select specific tables (called **Datasets** in Superset)
that you want exposed in Superset for querying.
Navigate to **Data ‣ Datasets** and select the **+ Dataset** button in the top right corner.

A modal window should pop up in front of you. Select your **Database**,
**Schema**, and **Table** using the drop downs that appear. In the following example,
we register the **cleaned_sales_data** table from the **examples** database.
To finish, click the **Add** button in the bottom right corner. You should now see your dataset in
the list of datasets.
### Customizing column properties

Now that you've registered your dataset, you can configure column properties
for how the column should be treated in the Explore workflow:

- Is the column temporal? (should it be used for slicing & dicing in time series charts?)
- Should the column be filterable?
- Is the column dimensional?
- If it's a datetime column, how should Superset parse
  the datetime format? (using the [ISO-8601 string pattern](https://en.wikipedia.org/wiki/ISO_8601))

<img src="/images/tutorial_column_properties.png" />
### Superset semantic layer

Superset has a thin semantic layer that adds many quality of life improvements for analysts.
The Superset semantic layer can store 2 types of computed data:

1. Virtual metrics: you can write SQL queries that aggregate values
   from multiple columns (e.g. `SUM(recovered) / SUM(confirmed)`) and make them
   available as columns (e.g. `recovery_rate`) for visualization in Explore.
   Aggregate functions are allowed and encouraged for metrics.
   You can also certify metrics for your team in this view.

<img src="/images/tutorial_sql_metric.png" />

2. Virtual calculated columns: you can write SQL queries that
   customize the appearance and behavior
   of a specific column (e.g. `CAST(recovery_rate AS FLOAT)`).
   Aggregate functions aren't allowed in calculated columns.

<img src="/images/tutorial_calculated_column.png" />
### Creating charts in Explore view

Superset has 2 main interfaces for exploring data:

- **Explore**: no-code viz builder. Select your dataset, select the chart,
  customize the appearance, and publish.
- **SQL Lab**: SQL IDE for cleaning, joining, and preparing data for the Explore workflow

We'll focus on the Explore view for creating charts right now.
To start the Explore workflow from the **Datasets** tab, click the name
of the dataset that will be powering your chart.

<img src="/images/tutorial_launch_explore.png" /><br/><br/>

You're now presented with a powerful workflow for exploring data and iterating on charts.

- The **Dataset** view on the left-hand side has a list of columns and metrics,
  scoped to the current dataset you selected.
- The **Data** preview below the chart area also gives you helpful data context.
- Using the **Data** and **Customize** tabs, you can change the visualization type,
  select the temporal column, select the metric to group by, and customize
  the aesthetics of the chart.

As you customize your chart using the drop-down menus, make sure to click the **Run** button
to get visual feedback.

<img src="/images/tutorial_explore_run.jpg" />

In the following screenshot, we craft a grouped Time-series Bar Chart to visualize
our quarterly sales data by product line just by clicking options in the drop-down menus.
<img src="/images/tutorial_explore_settings.jpg" />
### Creating a slice and dashboard

To save your chart, first click the **Save** button. You can either:

- Save your chart and add it to an existing dashboard
- Save your chart and add it to a new dashboard

In the following screenshot, we save the chart to a new "Superset Duper Sales Dashboard":

<img src="/images/tutorial_save_slice.png" />

To publish, click **Save & Go To Dashboard**.

Behind the scenes, Superset will create a slice and store all the information needed
to create your chart in its thin data layer
(the query, chart type, options selected, name, etc).

<img src="/images/tutorial_first_dashboard.png" />

To resize the chart, start by clicking the pencil button in the top right corner.

<img src="/images/tutorial_pencil_edit.png" />

Then, click and drag the bottom right corner of the chart until the chart layout snaps
into a position you like on the underlying grid.

<img src="/images/tutorial_chart_resize.png" />

Click **Save** to persist the changes.
Congrats! You’ve successfully linked, analyzed, and visualized data in Superset. There are a wealth
of other table configuration and visualization options, so please start exploring and creating
......
......
is fast, lightweight, intuitive, and loaded with options that make it easy for users of all skill
sets to explore and visualize their data, from simple pie charts to highly detailed deck.gl
geospatial charts.
Here are a **few different ways you can get started with Superset**:

- Download the [source from the Apache Software Foundation's website](https://dist.apache.org/repos/dist/release/superset/1.0.0/)
- Download the latest Superset version from [PyPI here](https://pypi.org/project/apache-superset/)
- Set up Superset locally with one command
  using [Docker Compose](docs/installation/installing-superset-using-docker-compose)
- Download the [Docker image](https://hub.docker.com/r/apache/superset) from Docker Hub
- Install the bleeding-edge master version of Superset
  [from GitHub](https://github.com/apache/superset/tree/master/superset)
Superset provides:

- An intuitive interface for visualizing datasets and crafting interactive dashboards
- A wide array of beautiful visualizations to showcase your data
- A code-free visualization builder to extract and present datasets
- A world-class SQL IDE for preparing data for visualization, including a rich metadata browser
- A lightweight semantic layer which empowers data analysts to quickly define custom dimensions and metrics
- Out-of-the-box support for most SQL-speaking databases
- Seamless, in-memory asynchronous caching and queries
- An extensible security model that allows configuration of very intricate rules on who can access which product features and datasets
- Integration with major authentication backends (database, OpenID, LDAP, OAuth, REMOTE_USER, etc.)
- The ability to add custom visualization plugins
- An API for programmatic customization
- A cloud-native architecture designed from the ground up for scale
Superset is cloud-native and designed to be highly available. It was designed to scale out to large,
distributed environments and works very well inside containers. While you can easily test drive
......
---
name: "Superset One"
title: "Superset One"
route: /docs/version-one
---
## Superset 1.0
Apache Superset 1.0 is a major milestone that the community has been working towards since the
very first commit at a hackathon at Airbnb back in 2015. Superset 1.0 packs a lot of new features,
uplevels usability, holds a higher quality standard, and raises the bar for releases to come.
This page chronicles the key advancements that our community has been building up towards this release.
While growing fast over the past four years, Superset had accumulated a certain amount of technical debt,
design debt, bugs, and idiosyncrasies. For this release, we wanted to pay the bulk of that debt off,
streamlining the core user flows, refreshing the overall look and feel, taking off some of the
scaffolding that was left standing around, and more generally, leveling up the user experience.
## User Experience
Visually, Superset 1.0 is stunning, introducing card layouts with thumbnails throughout the application,
streamlining navigation and content discovery with a new home page, redesigned menus,
and generally enriching existing pages.
<img src="/images/dashboard_card_view.jpg" />
Behind the scenes, we moved away from Bootstrap 2x in favor of building a
proper design system on top of Ant Design.
We also redesigned all of our CRUD (Create, Read, Update, Delete) pages, moving away
from the rigid "auto-magic" scaffolding provided by FAB (Flask App Builder)
to our own React-based solution that enables us to build richer experiences.
<img src="/images/explore_ui.jpg" />
More generally,
many rough edges got buffed, the whole product got polished,
and we managed to get our core user flows to, well, flow nicely.
## API
For engineers and hackers, we’ve made Superset much more modular,
extensible, and easier to integrate. We’re now exposing the building blocks of Superset
for engineers to extend or use in other projects. It’s now easier than ever to
create new visualization plugins for Superset and to share those plugins back with the community.
We’re excited by the possibilities this opens up and look forward to watching a growing
ecosystem of plugins come to life. We’ve also formalized a [public REST API](/docs/rest-api) that enables engineers to
do essentially everything that users can do in Superset, programmatically.
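As a small sketch of what the REST API makes possible, the snippet below authenticates against a Superset instance and lists the dashboards the user can see. The base URL and the `admin`/`admin` credentials are assumptions for a default local setup; the `/api/v1/security/login` and `/api/v1/dashboard/` endpoints and the `db` auth provider are part of the v1 API.

```python
import json
import urllib.request

BASE = "http://localhost:8088"  # assumed URL of a local Superset instance


def login_request(username: str, password: str) -> urllib.request.Request:
    """Build the POST /api/v1/security/login request; "db" is the
    default username/password auth provider."""
    body = json.dumps({"username": username, "password": password,
                       "provider": "db", "refresh": True}).encode()
    return urllib.request.Request(
        f"{BASE}/api/v1/security/login", data=body,
        headers={"Content-Type": "application/json"})


def dashboards_request(access_token: str) -> urllib.request.Request:
    """Build the GET /api/v1/dashboard/ request, authenticated with the
    JWT access token returned by the login endpoint."""
    return urllib.request.Request(
        f"{BASE}/api/v1/dashboard/",
        headers={"Authorization": f"Bearer {access_token}"})


def fetch_dashboard_titles(username: str = "admin",
                           password: str = "admin") -> list:
    """Log in, then return the titles of all dashboards visible to the user."""
    with urllib.request.urlopen(login_request(username, password)) as resp:
        token = json.load(resp)["access_token"]
    with urllib.request.urlopen(dashboards_request(token)) as resp:
        return [d["dashboard_title"] for d in json.load(resp)["result"]]
```

Calling `fetch_dashboard_titles()` against a running instance returns the visible dashboard titles; the same login-then-Bearer-token pattern extends to charts, datasets, and the other `/api/v1` resources.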
## Honorable Mentions
With 1680 PRs merged and 25+ SIPs (Superset Improvement Proposals) over 2020, it’s hard
to summarize what went into this release. Improvements happened in all aspects
of the project, from infrastructure to design, through backend and frontend, to community and
governance. Here are some honorable mentions that haven’t been covered above
but deserve a call-out in this post:
- Asynchronous backend improvements
- Metadata and data pane in explorer view
- Toolbars redesign (SQL Lab, dashboard, explore)
- Date range picker redesign
- Various Docker / Helm improvements
- Migration of key visualizations to plugins using ECharts
- Time series forecasting leveraging the Prophet library
- Improvements to and extensive use of our feature flag framework
- Improved analytics logging, capturing more events more consistently
- Exploration control panels improvements
- Improved SQL-to-explore flows
## Start Using Superset 1.0
**Release Notes**
To digest the full set of changes in 1.0, we recommend reading the
[full Release Notes](https://github.com/apache/superset/tree/master/RELEASING/release-notes-1-0)
on GitHub.
**Source Code**
You can download the official ASF release for 1.0 from the
[Apache distribution directory](https://dist.apache.org/repos/dist/release/superset/1.0.0/).
......
@@ -428,13 +428,13 @@
const Theme = () => {
</div>
<Carousel ref={slider} effect="scrollx" afterChange={onChange}>
<div className="imageContainer">
<img src="/images/explorer5.jpg" alt="" />
</div>
<div className="imageContainer">
<img src="/images/dashboard3.png" alt="" />
</div>
<div className="imageContainer">
<img src="/images/sqllab5.jpg" alt="" />
</div>
</Carousel>
</div>
......
Changed images:

docs/static/images/annotation.png (99.4 KB → 258.1 KB)
docs/static/images/edit-record.png (4.8 KB → 42.4 KB)
docs/static/images/resample.png (86.4 KB → 363.9 KB)
docs/static/images/rolling_mean.png (97.4 KB → 370.2 KB)
docs/static/images/upload_a_csv.png (37.3 KB → 102.4 KB)