rasteret.ingest¶
Ingest builders: source-specific logic that feeds into the Collection contract.
Each builder knows how to read records from one source type (STAC API,
Parquet record tables, etc.) and normalise them into an Arrow table that
satisfies the Collection contract columns
(id, datetime, geometry, assets, scene_bbox,
plus optional proj:epsg, {band}_metadata, year, month).
The shared normalisation layer lives in rasteret.ingest.normalize.
Classes¶
CollectionBuilder¶
CollectionBuilder(
    *,
    name: str = "",
    data_source: str = "",
    workspace_dir: str | Path | None = None,
)
Bases: ABC
Abstract base class for all collection builders.
Subclasses implement build to acquire data from their
specific source, normalise it, and return a Collection.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Human-readable collection name. | '' |
| data_source | str | Data source identifier for band mapping and URL policy. | '' |
| workspace_dir | str or Path | If set, persist the collection as partitioned Parquet. | None |
Source code in src/rasteret/ingest/base.py
RecordTableBuilder¶
RecordTableBuilder(
    path: str | Path,
    *,
    data_source: str = "",
    column_map: dict[str, str] | None = None,
    href_column: str | None = None,
    band_index_map: dict[str, int] | None = None,
    url_rewrite_patterns: dict[str, str] | None = None,
    filesystem: Any | None = None,
    columns: list[str] | None = None,
    filter_expr: Expression | None = None,
    name: str = "",
    workspace_dir: str | Path | None = None,
    enrich_cog: bool = False,
    band_codes: list[str] | None = None,
    max_concurrent: int = 300,
    backend: StorageBackend | None = None,
)
Bases: CollectionBuilder
Build a Collection from an existing Parquet/GeoParquet table.
Reads a Parquet record table where each row is a raster item
with at minimum the four contract columns (id, datetime,
geometry, assets), or columns that can be normalised into
them via column_map, href_column, and band_index_map.
When enrich_cog=True, the builder parses COG headers from the
asset URLs and adds {band}_metadata struct columns, making
the resulting Collection suitable for fast tiled reads and TorchGeo
integration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str or Path | Path/URI to the Parquet/GeoParquet file or dataset directory. | required |
| data_source | str | Data-source identifier for the resulting Collection. | '' |
| column_map | dict | Mapping from source column names to contract column names. | None |
| href_column | str | Column containing COG URLs. | None |
| band_index_map | dict | Mapping from band code to band index within the asset. | None |
| url_rewrite_patterns | dict | Pattern-to-replacement pairs applied to rewrite asset URLs. | None |
| filesystem | FileSystem | PyArrow filesystem for reading remote URIs. | None |
| columns | list of str | Scan-time column projection. | None |
| filter_expr | Expression | Scan-time predicate pushdown. | None |
| enrich_cog | bool | If True, parse COG headers and add {band}_metadata columns. | False |
| band_codes | list of str | Bands to enrich. If omitted, all bands found in the assets are enriched. | None |
| max_concurrent | int | Maximum concurrent HTTP connections for COG header parsing. | 300 |
| name | str | Collection name. Passed through to the normalisation layer. | '' |
| workspace_dir | str or Path | If provided, persist the resulting Collection as Parquet here. | None |
| backend | StorageBackend | I/O backend for authenticated range reads during COG header parsing. | None |
Source code in src/rasteret/ingest/parquet_record_table.py
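The aliasing that column_map drives is easy to picture on a single record. A toy sketch on plain dicts (the column and asset names here are made up; the real builder applies this to Arrow columns, not Python dicts):

```python
# Hypothetical source row whose columns need aliasing into the contract.
column_map = {"scene_id": "id", "acquired": "datetime", "footprint": "geometry"}
row = {
    "scene_id": "S2A_0001",
    "acquired": "2024-06-01T10:30:00Z",
    "footprint": "POLYGON((77.5 12.9, ...))",
    "assets": {"B04": "s3://bucket/B04.tif"},
}

# Alias step: rename mapped columns, pass everything else through.
aliased = {column_map.get(k, k): v for k, v in row.items()}
print(sorted(aliased))  # → ['assets', 'datetime', 'geometry', 'id']
```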
Functions¶
build¶
Read the record table and return a normalised Collection.
Pipeline: read -> alias -> prepare -> enrich -> normalize.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| **kwargs | Any |  | {} |
Returns:
| Type | Description |
|---|---|
| Collection | The normalised Collection built from the record table. |
Source code in src/rasteret/ingest/parquet_record_table.py
StacCollectionBuilder¶
StacCollectionBuilder(
data_source: str,
stac_api: str,
stac_collection: str | None = None,
workspace_dir: Path | None = None,
name: str | None = None,
band_map: dict[str, str] | None = None,
band_index_map: dict[str, int] | None = None,
cloud_config: CloudConfig | None = None,
max_concurrent: int = 300,
backend: StorageBackend | None = None,
static_catalog: bool = False,
)
Bases: CollectionBuilder
Build a Collection from a STAC API search or static catalog.
Searches a STAC API (or traverses a static STAC catalog when
static_catalog=True), parses COG headers for tile metadata,
and produces a Parquet-backed Collection with per-band acceleration
columns.
Source code in src/rasteret/ingest/stac_indexer.py
Functions¶
build¶
Build a Collection from STAC (sync wrapper).
Accepts bbox, date_range, and query keyword arguments.
Delegates to the async build_index.
Source code in src/rasteret/ingest/stac_indexer.py
build_index async¶
build_index(
bbox: BoundingBox | None = None,
date_range: DateRange | None = None,
query: dict[str, Any] | None = None,
)
Build a GeoParquet collection from a STAC search (async).
Returns a rasteret.core.collection.Collection.
Source code in src/rasteret/ingest/stac_indexer.py
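The sync-wrapper-over-async split described above can be sketched in plain asyncio (the Indexer class is a stand-in, not the real StacCollectionBuilder; the "search" just echoes its arguments):

```python
import asyncio


class Indexer:
    async def build_index(self, bbox=None, date_range=None, query=None):
        # Stand-in for the real async STAC search + COG header parsing.
        return {"bbox": bbox, "date_range": date_range, "items": 0}

    def build(self, **kwargs):
        # Sync wrapper: run the async builder to completion.
        return asyncio.run(self.build_index(**kwargs))


result = Indexer().build(bbox=(77.5, 12.9, 77.7, 13.1),
                         date_range=("2024-01-01", "2024-12-31"))
print(result["items"])  # → 0
```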
Functions¶
add_band_metadata_columns¶
add_band_metadata_columns(
    table: Table,
    band_codes: list[str],
    processed_items: list[dict],
) -> Table
Append {band}_metadata struct columns from parsed COG headers.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| table | Table | Arrow table with an id column. | required |
| band_codes | list of str | Band codes to create columns for. | required |
| processed_items | list of dict | Parsed COG header results, one dict per item. | required |
Returns:
| Type | Description |
|---|---|
| Table | Input table with {band}_metadata struct columns appended. |
Source code in src/rasteret/ingest/enrich.py
build_url_index_from_assets¶
build_url_index_from_assets(
    table: Table, band_codes: list[str] | None = None
) -> dict[str, dict[str, dict[str, Any]]]
Build {record_id: {band_code: {url, band_index}}} from assets.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| table | Table | Must contain an assets column. | required |
| band_codes | list of str | If given, only include these bands. Otherwise include all. | None |
Returns:
| Type | Description |
|---|---|
| dict | Nested mapping of record ID -> band code -> asset reference dict, where each asset reference dict contains url and band_index. |
Source code in src/rasteret/ingest/enrich.py
enrich_table_with_cog_metadata async¶
enrich_table_with_cog_metadata(
    table: Table,
    url_index: dict[str, dict[str, dict[str, Any]]],
    band_codes: list[str],
    *,
    max_concurrent: int = 300,
    batch_size: int = 100,
    backend: StorageBackend | None = None,
) -> Table
Parse COG headers and add {band}_metadata columns.
This is the high-level entry point for builders that have a
url_index but have not yet parsed COG headers.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| table | Table | Arrow table with an id column. | required |
| url_index | dict | Mapping of record ID -> band code -> asset reference dict. | required |
| band_codes | list of str | Band codes to create metadata columns for. | required |
| max_concurrent | int | Maximum concurrent HTTP connections. | 300 |
| batch_size | int | Batch size for COG header parsing. | 100 |
| backend | StorageBackend | I/O backend for authenticated range reads during COG header parsing. When omitted, uses the default auto-detecting backend. | None |
Returns:
| Type | Description |
|---|---|
| Table | Table with {band}_metadata columns added. |
Source code in src/rasteret/ingest/enrich.py
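The max_concurrent cap suggests a semaphore-bounded fan-out over asset URLs; a minimal sketch of that pattern (parse_header here only sleeps, it does not actually issue ranged reads of COG headers):

```python
import asyncio


async def parse_header(url: str, sem: asyncio.Semaphore) -> dict:
    async with sem:  # cap the number of in-flight "requests"
        await asyncio.sleep(0)  # stand-in for a ranged HTTP read
        return {"url": url, "tile_size": 512}


async def parse_all(urls: list[str], max_concurrent: int = 300) -> list[dict]:
    sem = asyncio.Semaphore(max_concurrent)
    # gather preserves input order, so results line up with urls.
    return await asyncio.gather(*(parse_header(u, sem) for u in urls))


headers = asyncio.run(parse_all([f"s3://b/{i}.tif" for i in range(5)],
                                max_concurrent=2))
print(len(headers))  # → 5
```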
build_collection_from_table¶
build_collection_from_table(
    table: Table,
    *,
    name: str = "",
    description: str = "",
    data_source: str = "",
    date_range: tuple[str, str] | None = None,
    workspace_dir: str | Path | None = None,
    partition_cols: Sequence[str] = ("year", "month"),
) -> Any
Normalise an Arrow table into a Collection.
Validates the Collection contract columns, adds scene_bbox
and partition columns when missing, and optionally materialises
to Parquet.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| table | Table | Arrow table with at least the required columns. | required |
| name | str | Human-readable collection name. | '' |
| description | str | Free-text description. | '' |
| data_source | str | Data source identifier. | '' |
| date_range | tuple[str, str] or None |  | None |
| workspace_dir | str or Path or None | If provided, persist the collection as partitioned Parquet here. | None |
| partition_cols | Sequence[str] | Columns to partition by when writing Parquet. | ('year', 'month') |
Returns:
| Type | Description |
|---|---|
| Collection | The normalised Collection. |
Source code in src/rasteret/ingest/normalize.py
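The validate-then-derive step can be sketched on plain dicts (a stand-in normalise function; the real implementation works on Arrow tables and also computes scene_bbox from geometry bounds):

```python
from datetime import datetime


def normalise(rows: list[dict]) -> list[dict]:
    # Mirror of the documented behaviour: check the four contract
    # columns, then derive year/month partition columns from datetime.
    required = {"id", "datetime", "geometry", "assets"}
    out = []
    for row in rows:
        missing = required - row.keys()
        if missing:
            raise ValueError(f"missing contract columns: {sorted(missing)}")
        dt = datetime.fromisoformat(row["datetime"])
        out.append({**row, "year": dt.year, "month": dt.month})
    return out


rows = [{"id": "scene-1", "datetime": "2024-06-01T10:30:00",
         "geometry": "POLYGON((77.5 12.9, ...))",
         "assets": {"B04": "s3://b/B04.tif"}}]
record = normalise(rows)[0]
print(record["year"], record["month"])  # → 2024 6
```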