Getting Started¶
The pins package helps you publish data sets, models, and other Python objects, making it easy to share them across projects and with your colleagues. You can pin objects to a variety of “boards”, including local folders (to share on a networked drive or with Dropbox), RStudio Connect, Amazon S3, Google Cloud Storage, Azure Data Lake, and more. This vignette will introduce you to the basics of pins.
from pins import board_local, board_folder, board_temp, board_urls, board_rsconnect
Getting started¶
Every pin lives in a pin board, so you must start by creating a pin board. In this vignette I’ll use a temporary board which is automatically deleted when your Python session is over:
board = board_temp()
In real life, you’d pick a board depending on how you want to share the data. Here are a few options:
board = board_local()  # share data across Python sessions on the same computer
board = board_folder("~/Dropbox")  # share data with others using Dropbox
board = board_folder("Z:\\my-team\\pins")  # share data using a shared network drive
board = board_rsconnect()  # share data with RStudio Connect
Reading and writing data¶
Once you have a pin board, you can write data to it with the .pin_write()
method:
from pins.data import mtcars
meta = board.pin_write(mtcars, "mtcars", type="csv")
Writing pin:
Name: 'mtcars'
Version: 20221220T200607Z-3b134
The first argument is the object to save (usually a data frame, but it can be any Python object), and the second argument gives the “name” of the pin. The name is basically equivalent to a file name; you’ll use it when you later want to read the data from the pin. The only rule for a pin name is that it can’t contain slashes.
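To make that rule concrete, here is a tiny hypothetical validator. `is_valid_pin_name()` is not part of the pins API; it just illustrates the constraint:

```python
def is_valid_pin_name(name: str) -> bool:
    # Hypothetical helper (not in pins): a pin name is file-name-like
    # and may not contain slashes.
    return bool(name) and "/" not in name

print(is_valid_pin_name("mtcars"))       # True
print(is_valid_pin_name("data/mtcars"))  # False
```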
Above, we saved the data as a CSV, but you can choose another format depending on what you’re saving and who else you want to be able to read it:
- `type="csv"` uses `to_csv()` from pandas to create a `.csv` file. CSVs can be read by any application, but only support simple columns (e.g. numbers, strings, dates), can take up a lot of disk space, and can be slow to read.
- `type="joblib"` uses `joblib.dump()` to create a binary Python data file. See the joblib docs for more information.
- `type="arrow"` uses `pyarrow` to create an arrow/feather file. Arrow is a modern, language-independent, high-performance file format designed for data science. Not every tool can read arrow files, but support is growing rapidly.
- `type="json"` uses `json.dump()` to create a `.json` file. Pretty much every programming language can read JSON files, but they only work well for nested lists.
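To see the JSON caveat concretely, here is a stdlib-only sketch (independent of pins) showing that simple structures round-trip cleanly while a tuple silently comes back as a list:

```python
import json

data = {"xs": [1, 2, 3], "point": (1.5, 2.5)}
restored = json.loads(json.dumps(data))

print(restored)  # {'xs': [1, 2, 3], 'point': [1.5, 2.5]}
# The tuple became a plain list: fine for simple data, but a reason
# to prefer type="joblib" for arbitrary Python objects.
```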
After you’ve pinned an object, you can read it back with pin_read()
:
board.pin_read("mtcars")
|    | mpg  | cyl | disp  | hp  | drat | wt    | qsec  | vs | am | gear | carb |
|----|------|-----|-------|-----|------|-------|-------|----|----|------|------|
| 0  | 21.0 | 6   | 160.0 | 110 | 3.90 | 2.620 | 16.46 | 0  | 1  | 4    | 4    |
| 1  | 21.0 | 6   | 160.0 | 110 | 3.90 | 2.875 | 17.02 | 0  | 1  | 4    | 4    |
| 2  | 22.8 | 4   | 108.0 | 93  | 3.85 | 2.320 | 18.61 | 1  | 1  | 4    | 1    |
| 3  | 21.4 | 6   | 258.0 | 110 | 3.08 | 3.215 | 19.44 | 1  | 0  | 3    | 1    |
| 4  | 18.7 | 8   | 360.0 | 175 | 3.15 | 3.440 | 17.02 | 0  | 0  | 3    | 2    |
| ...| ...  | ... | ...   | ... | ...  | ...   | ...   | ...| ...| ...  | ...  |
| 27 | 30.4 | 4   | 95.1  | 113 | 3.77 | 1.513 | 16.90 | 1  | 1  | 5    | 2    |
| 28 | 15.8 | 8   | 351.0 | 264 | 4.22 | 3.170 | 14.50 | 0  | 1  | 5    | 4    |
| 29 | 19.7 | 6   | 145.0 | 175 | 3.62 | 2.770 | 15.50 | 0  | 1  | 5    | 6    |
| 30 | 15.0 | 8   | 301.0 | 335 | 3.54 | 3.570 | 14.60 | 0  | 1  | 5    | 8    |
| 31 | 21.4 | 4   | 121.0 | 109 | 4.11 | 2.780 | 18.60 | 1  | 1  | 4    | 2    |

32 rows × 11 columns
You don’t need to supply the file type when reading data from a pin because pins automatically stores the file type in the metadata, the topic of the next section.
Note that when the data lives elsewhere, pins takes care of downloading and caching so that it’s only re-downloaded when needed. That said, most boards transmit pins over HTTP, and this is going to be slow and possibly unreliable for very large pins. As a general rule of thumb, we don’t recommend using pins with files over 500 MB. If you find yourself routinely pinning data larger than this, you might need to reconsider your data engineering pipeline.
Note

If you are using the RStudio Connect board (`board_rsconnect`), then you must specify your pin name as `<user_name>/<content_name>`. For example, `hadley/sales-report`.
Metadata¶
Every pin is accompanied by some metadata that you can access with pin_meta()
:
board.pin_meta("mtcars")
Meta(title='mtcars: a pinned 32 x 11 DataFrame', description=None, created='20221220T200607Z', pin_hash='3b134bae183b50c9', file='mtcars.csv', file_size=1333, type='csv', api_version=1, version=Version(created=datetime.datetime(2022, 12, 20, 20, 6, 7), hash='3b134'), tags=None, name='mtcars', user={}, local={})
This shows you the metadata that’s generated by default. This includes:

- `title`, a brief textual description of the dataset.
- an optional `description`, where you can provide more details.
- the date-time when the pin was `created`.
- the `file_size`, in bytes, of the underlying files.
- a unique `pin_hash` that you can supply to `pin_read()` to ensure that you’re reading exactly the data that you expect.
When creating the pin, you can override the default description or provide additional metadata that is stored with the data:
board.pin_write(
    mtcars,
    name="mtcars2",
    type="csv",
    description="Data extracted from the 1974 Motor Trend US magazine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973–74 models).",
    metadata={
        "source": "Henderson and Velleman (1981), Building multiple regression models interactively. Biometrics, 37, 391–411."
    }
)
Writing pin:
Name: 'mtcars2'
Version: 20221220T200607Z-3b134
Meta(title='mtcars2: a pinned 32 x 11 DataFrame', description='Data extracted from the 1974 Motor Trend US magazine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973–74 models).', created='20221220T200607Z', pin_hash='3b134bae183b50c9', file='mtcars2.csv', file_size=1333, type='csv', api_version=1, version=Version(created=datetime.datetime(2022, 12, 20, 20, 6, 7, 235063), hash='3b134bae183b50c9'), tags=None, name='mtcars2', user={'source': 'Henderson and Velleman (1981), Building multiple regression models interactively. Biometrics, 37, 391–411.'}, local={})
While we’ll do our best to keep the automatically generated metadata consistent over time, I’d recommend manually capturing anything you really care about in metadata.
Versioning¶
Every pin_write()
will create a new version:
board2 = board_temp()
board2.pin_write([1, 2, 3, 4, 5], name="x", type="json")
board2.pin_write([1, 2, 3], name="x", type="json")
board2.pin_write([1, 2], name="x", type="json")
board2.pin_versions("x")
Writing pin:
Name: 'x'
Version: 20221220T200607Z-2bc5d
Writing pin:
Name: 'x'
Version: 20221220T200607Z-c24c0
Writing pin:
Name: 'x'
Version: 20221220T200607Z-91d9a
|   | created             | hash  | version                |
|---|---------------------|-------|------------------------|
| 0 | 2022-12-20 20:06:07 | 2bc5d | 20221220T200607Z-2bc5d |
| 1 | 2022-12-20 20:06:07 | 91d9a | 20221220T200607Z-91d9a |
| 2 | 2022-12-20 20:06:07 | c24c0 | 20221220T200607Z-c24c0 |
By default, pin_read()
will return the most recent version:
board2.pin_read("x")
[1, 2, 3]
But you can request an older version by supplying the version
argument:
version = board2.pin_versions("x").version[1]
board2.pin_read("x", version = version)
[1, 2]
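The version strings above appear to combine a UTC timestamp and a short content hash, separated by a dash. Assuming that layout (an inference from the output above, not a documented API guarantee), the pieces can be pulled apart with the stdlib:

```python
from datetime import datetime

version = "20221220T200607Z-2bc5d"  # taken from the output above
stamp, short_hash = version.split("-")
created = datetime.strptime(stamp, "%Y%m%dT%H%M%SZ")

print(created)     # 2022-12-20 20:06:07
print(short_hash)  # 2bc5d
```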
Storing models¶
⚠️ Warning: the examples in this section use joblib to read and write data. Joblib uses the pickle format, and pickle files are not secure; only read pickle files you trust. To allow reading pickle files, set the `allow_pickle_read=True` argument. See https://docs.python.org/3/library/pickle.html.
You can write a pin with `type="joblib"` to store arbitrary Python objects, including fitted models from packages like scikit-learn.
For example, suppose you wanted to store a custom namedtuple
object.
from collections import namedtuple
board3 = board_temp(allow_pickle_read=True)
Coords = namedtuple("Coords", ["x", "y"])
coords = Coords(1, 2)
coords
Coords(x=1, y=2)
Using type="joblib"
lets you store and read back the custom coords
object.
board3.pin_write(coords, "my_coords", type="joblib")
board3.pin_read("my_coords")
Writing pin:
Name: 'my_coords'
Version: 20221220T200607Z-d5e4a
Coords(x=1, y=2)
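Since joblib uses the pickle format under the hood (the reason for the warning above), the round-trip behaviour can be sketched with the stdlib `pickle` module alone:

```python
import pickle
from collections import namedtuple

Coords = namedtuple("Coords", ["x", "y"])
coords = Coords(1, 2)

# Serialize to bytes and restore; the custom type survives intact.
restored = pickle.loads(pickle.dumps(coords))
print(restored)  # Coords(x=1, y=2)
```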
Caching¶
The primary purpose of pins is to make it easy to share data.
But pins is also designed to help you spend as little time as possible downloading data.
pin_read()
and pin_download()
automatically cache remote pins: they maintain a local copy of the data (so it’s fast) but always check that it’s up-to-date (so your analysis doesn’t use stale data).
Wouldn’t it be nice if you could take advantage of this feature for any dataset on the internet?
That’s the idea behind board_urls()
— you can assemble your own board from datasets, wherever they live on the internet.
For example, this code creates a board containing a single pin, penguins
, that refers to some fun data I found on GitHub:
my_data = board_urls("", {
    "penguins": "https://raw.githubusercontent.com/allisonhorst/palmerpenguins/master/inst/extdata/penguins_raw.csv"
})
You can read this data by combining `pin_download()` with `pandas.read_csv()`1:
fname = my_data.pin_download("penguins")
fname
['/home/runner/.cache/pins-py/http_e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/e6ac0d2da33fad7e72df6b900933a691b89ed7d54ec0e4a36fe45c32d7e2f67e_penguins_raw.csv']
import pandas as pd
pd.read_csv(fname[0]).head()
|   | studyName | Sample Number | Species | Region | Island | Stage | Individual ID | Clutch Completion | Date Egg | Culmen Length (mm) | Culmen Depth (mm) | Flipper Length (mm) | Body Mass (g) | Sex | Delta 15 N (o/oo) | Delta 13 C (o/oo) | Comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | PAL0708 | 1 | Adelie Penguin (Pygoscelis adeliae) | Anvers | Torgersen | Adult, 1 Egg Stage | N1A1 | Yes | 2007-11-11 | 39.1 | 18.7 | 181.0 | 3750.0 | MALE | NaN | NaN | Not enough blood for isotopes. |
| 1 | PAL0708 | 2 | Adelie Penguin (Pygoscelis adeliae) | Anvers | Torgersen | Adult, 1 Egg Stage | N1A2 | Yes | 2007-11-11 | 39.5 | 17.4 | 186.0 | 3800.0 | FEMALE | 8.94956 | -24.69454 | NaN |
| 2 | PAL0708 | 3 | Adelie Penguin (Pygoscelis adeliae) | Anvers | Torgersen | Adult, 1 Egg Stage | N2A1 | Yes | 2007-11-16 | 40.3 | 18.0 | 195.0 | 3250.0 | FEMALE | 8.36821 | -25.33302 | NaN |
| 3 | PAL0708 | 4 | Adelie Penguin (Pygoscelis adeliae) | Anvers | Torgersen | Adult, 1 Egg Stage | N2A2 | Yes | 2007-11-16 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | Adult not sampled. |
| 4 | PAL0708 | 5 | Adelie Penguin (Pygoscelis adeliae) | Anvers | Torgersen | Adult, 1 Egg Stage | N3A1 | Yes | 2007-11-16 | 36.7 | 19.3 | 193.0 | 3450.0 | FEMALE | 8.76651 | -25.32426 | NaN |
Calling `pin_download()` again returns the cached copy without re-downloading the file:

my_data.pin_download("penguins")
['/home/runner/.cache/pins-py/http_e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/e6ac0d2da33fad7e72df6b900933a691b89ed7d54ec0e4a36fe45c32d7e2f67e_penguins_raw.csv']
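As an aside: the cache directory name above is `http_` followed by a hex digest, and that digest happens to be exactly the SHA-256 of an empty string, matching the empty `""` path we passed to `board_urls()`. This is an inference from the output rather than documented behaviour, but the hash itself is easy to verify:

```python
import hashlib

# SHA-256 of the empty string: the same hex digest that appears in
# the cache directory name above.
digest = hashlib.sha256(b"").hexdigest()
print(digest)
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```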
- 1. Here I’m using `pandas.read_csv()`. For very large files, a faster reader such as `pyarrow.csv.read_csv()` may be worth considering.