Parsers

The gmat_run.parsers subpackage parses GMAT's output formats into pandas.DataFrame objects. Each parser exposes a parse(path) -> pandas.DataFrame function and depends only on the file layout — no gmatpy import, no GMAT install required. This lets parser logic be unit-tested against fixture files alone.

You normally do not call these directly: Mission.run returns a Results whose .reports, .ephemerides, and .contacts mappings already dispatch to the right parser based on the file's content. Reach for these functions only when you have a stand-alone GMAT output file you want to load without driving a Mission, or when you are writing your own dispatch logic.

ReportFile

parse

parse(path: str | PathLike[str]) -> DataFrame

Parse a GMAT ReportFile into a :class:pandas.DataFrame.

Column names are taken verbatim from the header row (dots preserved, e.g. Sat.Earth.SMA). Each column is coerced to int64 or float64 if every value parses as numeric; otherwise the column stays object/str.
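The coercion rule can be sketched with plain pandas (a simplified approximation for illustration, not the parser's actual implementation):

```python
import pandas as pd

def coerce_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Coerce each column to int64/float64 when every value parses as numeric."""
    for col in df.columns:
        numeric = pd.to_numeric(df[col], errors="coerce")
        if numeric.notna().all():  # every value parsed -> keep the numeric dtype
            df[col] = numeric
    return df

df = coerce_columns(pd.DataFrame({
    "Sat.ID": ["1", "2"],                                # all int   -> int64
    "Sat.Earth.SMA": ["7000.0", "7001.5"],               # all float -> float64
    "Sat.UTCGregorian": ["01 Jan 2000", "02 Jan 2000"],  # stays object
}))
```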

Parameters:

Name Type Description Default
path str | PathLike[str]

Path to the ReportFile on disk.

required

Returns:

Type Description
DataFrame

A DataFrame with one row per report event and one column per header field, in header order.

Raises:

Type Description
GmatOutputParseError

The file is empty, or a data row's column count does not match the header.

EphemerisFile (CCSDS-OEM, text)

parse

parse(path: str | PathLike[str]) -> DataFrame

Parse a CCSDS-OEM ephemeris file into a :class:pandas.DataFrame.

Parameters:

Name Type Description Default
path str | PathLike[str]

Path to the .oem (or .eph) file on disk.

required

Returns:

Type Description
DataFrame

A DataFrame with one row per state record. Columns are Epoch (a datetime64[ns] column) plus X, Y, Z, VX, VY, VZ (all float64). Segments are concatenated in file order.

Metadata surfaces on df.attrs:

  • df.attrs["epoch_scales"] = {"Epoch": time_scale} mirrors the convention from :mod:gmat_run.parsers.epoch.
  • Flat keys (object_name, central_body, coordinate_system, time_scale, interpolation, interpolation_degree) are set when every segment agrees.
  • df.attrs["segments"] lists the full per-segment metadata dicts (only present when more than one segment was parsed).
  • df.attrs["file_header"] carries the pre-segment header keys (CCSDS_OEM_VERS, CREATION_DATE, ORIGINATOR).
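The attrs convention can be illustrated on a toy stand-in frame (hypothetical values set by hand here; in practice the parser populates them):

```python
import pandas as pd

# toy stand-in for a parsed single-segment OEM frame
df = pd.DataFrame({
    "Epoch": pd.to_datetime(["2000-01-01T00:00:00", "2000-01-01T00:01:00"]),
    "X": [7000.0, 7001.2], "Y": [0.0, 8.1], "Z": [0.0, 0.4],
    "VX": [0.0, -0.01], "VY": [7.5, 7.49], "VZ": [0.0, 0.0],
})
# metadata rides on df.attrs rather than as extra columns
df.attrs["epoch_scales"] = {"Epoch": "UTC"}
df.attrs["central_body"] = "Earth"

# callers read the scale back before comparing epochs across files
scale = df.attrs["epoch_scales"]["Epoch"]
multi_segment = "segments" in df.attrs  # only set for multi-segment files
```

Note that pandas documents DataFrame.attrs as experimental, and the metadata may not survive every DataFrame operation; read it off the parser's return value directly rather than off derived frames.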

Raises:

Type Description
GmatOutputParseError

The file is empty, no META_START block was found, a meta line is malformed, a record's column count is wrong, or an epoch / state value cannot be parsed.

EphemerisFile (STK-TimePosVel, text)

parse

parse(path: str | PathLike[str]) -> DataFrame

Parse an STK-TimePosVel ephemeris file into a :class:pandas.DataFrame.

Parameters:

Name Type Description Default
path str | PathLike[str]

Path to the .e file on disk.

required

Returns:

Type Description
DataFrame

A DataFrame with one row per state record. Columns are Epoch (a datetime64[ns] column with the offset added to ScenarioEpoch) plus X, Y, Z, VX, VY, VZ (all float64).

Metadata surfaces on df.attrs:

  • df.attrs["epoch_scales"] = {"Epoch": "UTC"} — see module docstring for the UTC default rationale.
  • Flat keys (central_body, coordinate_system, interpolation, interpolation_degree, distance_unit, time_scale) for whichever meta keys the file declared.
  • df.attrs["scenario_epoch"] carries the raw ScenarioEpoch text so callers can re-interpret it if their mission used a non-UTC EpochFormat.
  • df.attrs["file_header"] carries the version banner and any # WrittenBy … comment lines from above BEGIN Ephemeris.

Raises:

Type Description
GmatOutputParseError

The file is empty, missing a stk.v.X.Y banner, missing BEGIN Ephemeris / ScenarioEpoch / EphemerisTimePosVel, declares an unsupported data section (EphemerisTimePos, EphemerisTimePosVelAcc), or contains a malformed meta line, record column count, offset, state value, or ScenarioEpoch.

is_stk_ephemeris

is_stk_ephemeris(path: str | PathLike[str]) -> bool

Return True if path looks like an STK-TimePosVel ephemeris.

Sniffs the file's first non-blank, non-comment line for an stk.v.X.Y banner. Used by :mod:gmat_run.results to dispatch to the right parser without relying on file extension.
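A content sniff of this kind is easy to reproduce; the following is a sketch under the assumption that the banner matches stk.v.<digits>.<digits> (the real function's matching rules may differ):

```python
import re
from pathlib import Path

_BANNER = re.compile(r"stk\.v\.\d+\.\d+", re.IGNORECASE)

def looks_like_stk(path) -> bool:
    """True if the first non-blank, non-comment line carries an stk.v.X.Y banner."""
    for line in Path(path).read_text(errors="replace").splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blank lines and comment lines
        return bool(_BANNER.search(stripped))
    return False  # file had no substantive lines at all
```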

EphemerisFile (SPK, binary)

parse

parse(
    path: str | PathLike[str],
    *,
    sampling_step: float | None = None,
) -> DataFrame

Parse an SPK ephemeris file into a :class:pandas.DataFrame.

Parameters:

Name Type Description Default
path str | PathLike[str]

Path to the .bsp file on disk.

required
sampling_step float | None

Output cadence in seconds. None (default) returns one row per segment endpoint; a positive float uniformly samples the union coverage window at that cadence.

None

Returns:

Type Description
DataFrame

A DataFrame with one row per state. Columns are Epoch (datetime64[ns], UTC) plus X, Y, Z, VX, VY, VZ (all float64, kilometres / km·s⁻¹ — SPICE's native units, matching the OEM/STK parsers' km defaults).

Metadata surfaces on df.attrs:

  • epoch_scales = {"Epoch": "UTC"} — see module docstring.
  • time_scale = "TDB" — the kernel's native scale; the Epoch column is converted from this to UTC.
  • target_body, observer_body — common names where SPICE can resolve them, otherwise the integer NAIF IDs as strings.
  • coordinate_system — the segment's reference frame (e.g. "J2000").
  • coverage_start, coverage_stop — UTC Timestamps spanning the kernel's union coverage.
  • sampling_step — the value passed in (None or float).
  • file_header = {"daf_id": "DAF/SPK", "internal_filename": "..."}.

Raises:

Type Description
GmatOutputParseError

spiceypy is not installed; the file is unreadable or not a DAF/SPK kernel; the kernel has no segments; the kernel mixes (target, observer, frame) tuples; or sampling_step is non-positive.
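The cadence arithmetic for a positive sampling_step amounts to laying a uniform grid over the coverage window; a minimal sketch with pandas, using made-up window values rather than a real kernel's coverage:

```python
import pandas as pd

# hypothetical union coverage window and cadence
start = pd.Timestamp("2024-01-01T00:00:00")
stop = pd.Timestamp("2024-01-01T01:00:00")
sampling_step = 600.0  # seconds

# one sample every sampling_step seconds across [start, stop], endpoints included
epochs = pd.date_range(start, stop, freq=pd.Timedelta(seconds=sampling_step))
```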

is_spk_ephemeris

is_spk_ephemeris(path: str | PathLike[str]) -> bool

Return True if path looks like a DAF/SPK kernel.

Reads the first eight bytes and matches the ASCII "DAF/SPK " magic. Used by :mod:gmat_run.results to dispatch by content rather than by extension. Pure-bytes — does not import spiceypy.
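The magic-byte check can be reproduced in a couple of lines (a sketch; the real function may treat short files and I/O errors differently):

```python
def looks_like_spk(path) -> bool:
    """True if the file starts with the eight-byte ASCII magic b'DAF/SPK '."""
    with open(path, "rb") as fh:
        return fh.read(8) == b"DAF/SPK "
```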

CCSDS-AEM attitude ephemeris

parse

parse(path: str | PathLike[str]) -> DataFrame

Parse a CCSDS-AEM attitude ephemeris file into a :class:pandas.DataFrame.

Parameters:

Name Type Description Default
path str | PathLike[str]

Path to the .aem file on disk.

required

Returns:

Type Description
DataFrame

A DataFrame with one row per attitude record. Columns depend on the file's ATTITUDE_TYPE:

  • QUATERNION → Epoch plus Q1, Q2, Q3, Q4 in source order; df.attrs["quaternion_type"] ("LAST" / "FIRST") names the scalar component.
  • EULER_ANGLE → Epoch plus EulerAngle1, EulerAngle2, EulerAngle3; df.attrs["euler_rot_seq"] carries the rotation sequence ("321" etc.).

Common metadata surfaces on df.attrs:

  • df.attrs["epoch_scales"] = {"Epoch": time_scale} mirrors the convention from :mod:gmat_run.parsers.epoch.
  • Flat keys (attitude_type, object_name, object_id, center_name, ref_frame_a, ref_frame_b, attitude_dir, time_scale, interpolation, interpolation_degree, plus the type-specific quaternion_type / euler_rot_seq) are set when every segment agrees on them.
  • df.attrs["segments"] lists the full per-segment metadata dicts (only present when more than one segment was parsed).
  • df.attrs["file_header"] carries the pre-segment header keys (CCSDS_AEM_VERS, CREATION_DATE, ORIGINATOR).

Raises:

Type Description
GmatOutputParseError

The file is empty, no META_START block was found, a meta line is malformed, ATTITUDE_TYPE is missing or unsupported, the DATA_START/DATA_STOP brackets are unbalanced, segments declare different ATTITUDE_TYPEs, a record's column count is wrong, or an epoch / numeric value cannot be parsed.

is_aem_ephemeris

is_aem_ephemeris(path: str | PathLike[str]) -> bool

Return True if path looks like a CCSDS-AEM file.

Sniffs the first non-blank, non-comment line for a CCSDS_AEM_VERS = … header. Cheap enough to run on every candidate file when classifying a directory's worth of attitude artefacts.

ContactLocator

parse

parse(path: str | PathLike[str]) -> DataFrame

Parse a GMAT ContactLocator report into a :class:pandas.DataFrame.

The returned schema depends on the underlying ContactLocator.ReportFormat:

  • Legacy → Observer, Start, Stop, Duration.
  • ContactRangeReport → Observer, Duration, Start, Stop, StartRange, StopRange.
  • SiteViewMaxElevationReport → Observer, Start, Stop, Duration, MaxElevation, MaxElevationTime.
  • SiteViewMaxElevationRangeReport → Observer, Start, Stop, Duration, MaxElevation, MaxElevationTime, StartRange, StopRange.
  • AzimuthElevationRangeReport → Pass, Observer, Time, Azimuth, Elevation, Range.
  • AzimuthElevationRangeRangeRateReport → as above plus RangeRate.

Time columns are datetime64[ns]; Duration is timedelta64[ns]; Pass is int64; Observer is object; numeric value columns (ranges, angles, rates) are float64.

df.attrs:

  • target — single str, the Target resource name from the file.
  • report_format — variant name as GMAT spells it (e.g. "Legacy").
  • time_scale — always "UTC" in current GMAT releases.
  • epoch_scales — {datetime_column: "UTC"} for every parsed time column.
  • event_counts — {observer: int}. Lifted from Legacy's Number of events : N lines; derived for the per-event variants via a row-count groupby on Observer; derived for the AzEl variants via a Pass nunique per observer.
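The two derived counts can be sketched on toy frames (hypothetical observer names; column names follow the schemas listed earlier):

```python
import pandas as pd

# per-event variants: one row per event, so a row-count groupby suffices
events = pd.DataFrame({
    "Observer": ["GS1", "GS1", "GS2"],
    "Duration": pd.to_timedelta([300, 420, 610], unit="s"),
})
per_event_counts = events.groupby("Observer").size().to_dict()

# AzEl variants: many ticks per pass, so count distinct Pass numbers per observer
ticks = pd.DataFrame({
    "Observer": ["GS1"] * 4 + ["GS2"] * 2,
    "Pass": [1, 1, 2, 2, 1, 1],
})
azel_counts = ticks.groupby("Observer")["Pass"].nunique().to_dict()
```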

Parameters:

Name Type Description Default
path str | PathLike[str]

Path to the ContactLocator text report on disk.

required

Returns:

Type Description
DataFrame

A DataFrame with one row per event (or per (Pass, tick) for the AzEl variants), in source order.

Raises:

Type Description
GmatOutputParseError

The file is empty or missing a Target: line, a Legacy file has no Observer: blocks, the column-header fingerprint matches no known ReportFormat, a data row has the wrong column count, an epoch value is malformed, or a numeric value does not parse.

Epoch promotion

promote_epochs

promote_epochs(df: DataFrame) -> DataFrame

Promote recognised epoch columns in df to datetime64[ns] in place.

The DataFrame is mutated and returned so callers can chain. Time scales for promoted columns are recorded under df.attrs["epoch_scales"] as a {column_name: scale_string} dict. scale_string is one of "A1", "TAI", "UTC", "TT", "TDB".

Idempotent: calling this twice on the same frame is a no-op on the second pass, because promoted columns are already datetime64[ns] and are skipped.
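The skip-if-already-promoted logic can be sketched as follows. This is a simplified stand-in, not the real function: it handles only one epoch suffix (UTCGregorian, with an assumed GMAT-style Gregorian format) where the real promote_epochs recognises ten suffixes across five time scales.

```python
import pandas as pd
from pandas.api.types import is_datetime64_any_dtype

def promote(df: pd.DataFrame) -> pd.DataFrame:
    """Promote columns whose last dotted segment is UTCGregorian; idempotent."""
    for col in df.columns:
        if is_datetime64_any_dtype(df[col]):
            continue  # already promoted -> the second pass is a no-op
        if col.split(".")[-1] == "UTCGregorian":
            df[col] = pd.to_datetime(df[col], format="%d %b %Y %H:%M:%S.%f")
            df.attrs.setdefault("epoch_scales", {})[col] = "UTC"
    return df

df = pd.DataFrame({
    "Sat.UTCGregorian": ["01 Jan 2000 11:59:28.000"],
    "Sat.Earth.SMA": [7000.0],
})
promote(promote(df))  # calling twice is safe: promoted columns are skipped
```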

Parameters:

Name Type Description Default
df DataFrame

DataFrame whose columns follow GMAT's {resource}.{field} naming convention. Columns whose last segment is one of the ten recognised epoch suffixes are promoted.

required

Returns:

Type Description
DataFrame

df itself, mutated in place.

Raises:

Type Description
GmatOutputParseError

A recognised epoch column contains values that cannot be parsed (malformed Gregorian text, non-numeric ModJulian, or a ModJulian value that overflows datetime64[ns]'s range).