Write data to InfluxDB
Discover what you’ll need to write data into InfluxDB OSS (open source). Learn how to quickly start collecting data, and then explore ways to write data, best practices, and what we recommend if you’re migrating a large amount of historical data.
- What you’ll need
- Quickly start collecting data
- Load data from sources in the InfluxDB UI
- Use no-code solutions
- Use developer tools
- Best practices for writing data
- Next steps
What you’ll need
To write data into InfluxDB, you need the following:
- organization – See View organizations for instructions on viewing your organization ID.
- bucket – See View buckets for instructions on viewing your bucket ID.
- authentication token – See View tokens for instructions on viewing your authentication token.
- InfluxDB URL – See InfluxDB URLs.
The InfluxDB setup process creates each of these.
Use line protocol format to write data into InfluxDB. Each line represents a data point. Each point requires a measurement and field set and may also include a tag set and a timestamp.
Line protocol data looks like this:
mem,host=host1 used_percent=23.43234543 1556892576842902000
cpu,host=host1 usage_user=3.8234,usage_system=4.23874 1556892726597397000
mem,host=host1 used_percent=21.83599203 1556892777007291000
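The structure of each point (measurement, optional tag set, field set, optional timestamp) can be sketched with a small helper. This is a simplified illustration, not the official client library, and it skips the escaping rules that real line protocol requires for special characters:

```python
import time

def to_line_protocol(measurement, fields, tags=None, timestamp=None):
    """Build a single simplified line-protocol point (no escaping of special characters)."""
    # Tag set (optional): comma-prefixed key=value pairs appended to the measurement
    tag_str = "".join(f",{k}={v}" for k, v in (tags or {}).items())
    # Field set (required): comma-joined key=value pairs
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    # Timestamp (optional): defaults to the current time in nanoseconds
    ts = timestamp if timestamp is not None else time.time_ns()
    return f"{measurement}{tag_str} {field_str} {ts}"

# Reproduces the first example line above
line = to_line_protocol("mem", {"used_percent": 23.43234543},
                        tags={"host": "host1"}, timestamp=1556892576842902000)
```

For production use, prefer an official InfluxDB client library, which handles escaping and data types correctly.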
Timestamp precision
When writing data to InfluxDB, we recommend including a timestamp with each point. If a data point does not include a timestamp when it is received by the database, InfluxDB uses the current system time (UTC) of its host machine.
The default timestamp precision is nanoseconds. If your timestamps use any precision other than nanoseconds (ns), you must specify the precision in your write request. InfluxDB accepts the following precisions:
- ns - Nanoseconds
- us - Microseconds
- ms - Milliseconds
- s - Seconds
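The relationship between the four precisions is a simple power-of-1000 scaling, which a short sketch makes concrete:

```python
# Divisors to convert a nanosecond timestamp to each supported precision
PRECISION_DIVISORS = {"ns": 1, "us": 1_000, "ms": 1_000_000, "s": 1_000_000_000}

def convert_ns(ts_ns, precision):
    """Truncate a nanosecond timestamp to the requested precision."""
    return ts_ns // PRECISION_DIVISORS[precision]

# The nanosecond timestamp from the line protocol example above
ts = 1556892576842902000
seconds = convert_ns(ts, "s")    # second-precision timestamp
millis = convert_ns(ts, "ms")    # millisecond-precision timestamp
```

If you write `seconds` or `millis` values like these, set `precision=s` or `precision=ms` in the write request so InfluxDB interprets them correctly.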
For more details about line protocol, see the Line protocol reference and Best practices for writing data.
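The pieces listed above (organization, bucket, token, URL, line protocol, and precision) come together in a single HTTP request to the `/api/v2/write` endpoint. A minimal sketch using only the Python standard library; the URL, organization, bucket, and token values are placeholders you would replace with your own:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder values -- substitute your own URL, organization, bucket, and token
INFLUX_URL = "http://localhost:8086"
ORG = "my-org"
BUCKET = "my-bucket"
TOKEN = "my-token"

def build_write_request(lines, precision="ns"):
    """Compose (but do not send) a write request for the given line-protocol lines."""
    params = urlencode({"org": ORG, "bucket": BUCKET, "precision": precision})
    return Request(
        f"{INFLUX_URL}/api/v2/write?{params}",
        data="\n".join(lines).encode(),
        headers={"Authorization": f"Token {TOKEN}",
                 "Content-Type": "text/plain; charset=utf-8"},
        method="POST",
    )

req = build_write_request(["mem,host=host1 used_percent=23.43 1556892576842902000"])
# urllib.request.urlopen(req) would send it to a running InfluxDB instance
```

In practice you would usually use the influx CLI or an official client library rather than raw HTTP, but the request shape is the same.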
Quickly start collecting data
Familiarize yourself with querying, visualizing, and processing data in InfluxDB Cloud and InfluxDB OSS by collecting data right away. The following options are available:
Quick Start for InfluxDB OSS
Select Quick Start in the last step of the InfluxDB user interface (UI) setup process to quickly start collecting data with InfluxDB. Quick Start creates a data scraper that collects metrics from the InfluxDB /metrics endpoint. The scraped data provides a robust dataset of internal InfluxDB metrics that you can query, visualize, and process.
Use Quick Start to collect InfluxDB metrics
After setting up InfluxDB v2.0, the “Let’s start collecting data!” page displays options for collecting data. Click Quick Start.
InfluxDB creates and configures a new scraper. The target URL points to the /metrics HTTP endpoint of your local InfluxDB instance (for example, http://localhost:8086/metrics), which outputs internal InfluxDB metrics in the Prometheus data format. The scraper stores the scraped metrics in the bucket created during the initial setup process.
Quick Start is only available in the last step of the setup process. If you missed the Quick Start option, you can manually create a scraper that scrapes data from the /metrics endpoint.
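The /metrics endpoint emits plain text in the Prometheus exposition format: comment lines starting with `#` describe each metric, and data lines pair a metric name with a value. A simplified parsing sketch (the sample metric shown is illustrative):

```python
# A simplified sample of Prometheus exposition text like that served by /metrics;
# the metric name and value here are illustrative, not a guaranteed output
SAMPLE = """\
# HELP boltdb_reads_total Total number of boltdb reads
# TYPE boltdb_reads_total counter
boltdb_reads_total 41
"""

def parse_metrics(text):
    """Extract (name, value) pairs from simple Prometheus exposition text."""
    metrics = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comment lines and blank lines
        name, value = line.rsplit(" ", 1)
        metrics[name] = float(value)
    return metrics
```

The scraper handles this parsing for you; the sketch only shows the shape of the data it reads.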
Sample data
Use sample data sets to quickly populate InfluxDB with sample time series data.
Next steps
With your data in InfluxDB, you’re ready to do one or more of the following:
Query and explore your data
Query data using Flux, the UI, and the influx command line interface. See Query data.
Process your data
Use InfluxDB tasks to process and downsample data. See Process data.
Visualize your data
Build custom dashboards to visualize your data. See Visualize data.
Monitor your data and send alerts
Monitor your data and send alerts based on specified logic. See Monitor and alert.