Logging

EasyFabric provides a structured logging system for Microsoft Fabric notebooks. It produces one log file per pipeline run, stored in OneLake (Meta lakehouse). All notebooks in the same execution tree share the same log file automatically.

Overview

  • Each pipeline run gets a single log file, identified by Fabric's activityId
  • Parent and child notebooks write to the same file — no manual wiring needed
  • Log entries can be persisted to a SQL table (Meta.dbo.logging) for querying
  • Logs include structured fields: source, object name, type, category, and message

Quick start — standalone notebook

The simplest case: a single notebook that initializes logging, does some work, and finalizes.

from easyfabric.fabric import init_logging, log_segment, save_log_file_to_table

init_logging(log_source="Silver", log_object="Customers")

with log_segment("Load", "Customers"):
    # your loading logic here
    pass

save_log_file_to_table()

init_logging creates a log file in the Meta lakehouse under Files/Logs/{date}/{activityId}.log. The log_segment context manager logs START and END entries around your code. save_log_file_to_table always logs an END entry with duration for the current notebook, and persists log entries to the database when called from the top-level notebook.

Quick start — DAG notebook

In a typical DAG, a parent notebook orchestrates multiple child notebooks (Bronze, Silver, Gold). Each child calls init_logging at the top — this is safe because init_logging is idempotent. It detects the shared activityId from Fabric's runtime context and reuses the existing log file.

Parent notebook (DAG):

from easyfabric.fabric import init_logging, log_segment, save_log_file_to_table

init_logging(log_source="Sys", log_object="DAG_Daily")

with log_segment("Orchestration", "Bronze"):
    notebookutils.notebook.run("Bronze_Load")

with log_segment("Orchestration", "Silver"):
    notebookutils.notebook.run("Silver_Load")

with log_segment("Orchestration", "Gold"):
    notebookutils.notebook.run("Gold_Load")

save_log_file_to_table()

Child notebook (e.g. Bronze_Load):

from easyfabric.fabric import init_logging, log_segment, save_log_file_to_table

# Safe to call again — reuses the parent's log file
init_logging(log_source="Bronze", log_object="Customers")

with log_segment("Load", "Customers"):
    # bronze loading logic
    pass

save_log_file_to_table()

Every notebook in the tree calls init_logging and save_log_file_to_table. Each call logs an END entry with duration for that notebook. The top-level notebook's call additionally persists all log entries to the database — so any notebook can be run independently as the entry point.

Parameters

init_logging(log_source, log_object)

Initializes the logging session. Creates (or reuses) a log file on OneLake and configures the Python root logger.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| log_source | str | "Sys" | The logical source layer (e.g., "Bronze", "Silver", "Gold", "Sys"). |
| log_object | str | None | The object being processed (e.g., "Customers"). Defaults to the current notebook name if not provided. |

Returns: The absolute OneLake path of the log file (str).

log_segment(type, name)

Context manager that logs START and END entries around a block of code. If an exception occurs, it logs the error and re-raises.

| Parameter | Type | Description |
| --- | --- | --- |
| type | str | The log category (e.g., "Load", "Transform", "Orchestration"). |
| name | str | A descriptive name for the segment (e.g., "Customers", "Bronze Loading"). |

save_log_file_to_table(end_log=False)

Call at the end of every notebook. Always logs an END entry with total duration for the current notebook. When called from the top-level notebook (or when end_log=True), it also persists all log entries to Meta.dbo.logging, clears handlers, and resets state.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| end_log | bool | False | Force log persistence and cleanup even if this is not the top-level notebook. |

How it works — the notebook hierarchy

Fabric provides runtime context identifiers that EasyFabric uses to coordinate logging across notebooks:

| Identifier | Scope | Used for |
| --- | --- | --- |
| activityId | Shared across the entire execution tree | Log file name and batch_id in the logging table |
| currentRunId | Unique per notebook execution | Distinguishing log entries from different notebooks |
| parentRunId | References the parent notebook's run | Tracking the notebook call hierarchy |
| rootNotebookId | References the top-level notebook | Determining which notebook is the entry point (is_top_level_notebook()) |

Because all notebooks share the same activityId, they all write to the same log file — no configuration needed.
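One plausible way the entry-point check could work is sketched below. This is illustrative only, based on the identifiers in the table above; EasyFabric's actual implementation of is_top_level_notebook may differ.

```python
def is_top_level_notebook(ctx: dict) -> bool:
    # Illustrative sketch: a notebook run with no parent run is the
    # entry point. A real implementation could instead compare the
    # current notebook id against rootNotebookId.
    return ctx.get("parentRunId") is None
```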

Error handling

log_segment catches exceptions, logs them as errors, and re-raises. You don't need to add try/except blocks for logging purposes.

with log_segment("Load", "Customers"):
    df = spark.read.csv("missing_file.csv")  # raises an exception
    # log_segment logs: "END: Customers (Failed): [error message]"
    # then re-raises the original exception
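The catch-log-re-raise behavior can be sketched with a standard context manager. This is a minimal re-implementation of the documented behavior for illustration (log_segment_sketch is not EasyFabric's actual code, and the exact log message format is assumed):

```python
import logging
from contextlib import contextmanager

@contextmanager
def log_segment_sketch(seg_type: str, name: str):
    # Illustrative sketch of log_segment: log START, run the block,
    # log END; on failure, log the error and re-raise.
    log = logging.getLogger(__name__)
    log.info("START: %s (%s)", name, seg_type)
    try:
        yield
    except Exception as exc:
        log.error("END: %s (Failed): %s", name, exc)
        raise  # re-raise so the notebook run still fails
    log.info("END: %s (Succeeded)", name)
```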

Errors are captured both in the OneLake log file and in the Meta.dbo.logging table (when save_log_file_to_table is called).

Important notes

  • Call init_logging() at the top of every notebook — it's idempotent and safe to call multiple times. If the log file already exists for the current activityId, it reuses it.
  • Call save_log_file_to_table() at the end of every notebook — it always logs an END entry with duration, and automatically handles persistence when called from the top-level notebook.
  • Interactive sessions — when running a notebook interactively (not via a pipeline), the log file is prefixed with usr_ (e.g., usr_{activityId}.log).
  • Local development — without Fabric context (e.g., running locally), a UUID fallback is used as the activity ID.
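The UUID fallback mentioned above could be implemented along these lines. resolve_activity_id is a hypothetical helper shown only to illustrate the behavior:

```python
import uuid

def resolve_activity_id(runtime_ctx: dict) -> str:
    # Illustrative sketch: reuse Fabric's shared activityId when the
    # runtime provides one; otherwise (e.g., local development without
    # Fabric context) fall back to a freshly generated UUID.
    return runtime_ctx.get("activityId") or str(uuid.uuid4())
```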