Conversation

@cfichtlscherer
Contributor

Description

This PR adds a settings.activity_based_timing option to OpenMC. When enabled, each source's strength is interpreted as its activity in Bq, and each particle is assigned a birth timestamp sampled from an independent per-source Poisson process.

This intentionally reuses source.strength rather than introducing a separate source.activity parameter, since both control "how much" a source contributes (having both leads to ambiguous semantics and redundant logic).

The motivation is that, combined with #3785, this gives OpenMC the capability to model coincidence gamma detector simulations: simulate coincident sources with activity_based_timing and tally heating with a fine TimeFilter in the detector's active volumes.
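For illustration, here is a minimal sketch of the underlying sampling idea (my own illustration, not the PR's actual implementation): for a Poisson process whose rate is the source activity in Bq, birth timestamps are the cumulative sum of exponentially distributed inter-arrival times.

```python
import numpy as np

def sample_birth_times(activity_bq, n_particles, seed=None):
    """Sample particle birth timestamps [s] from a Poisson process.

    The process rate is the source activity in Bq (decays per second),
    so inter-arrival times are exponential with mean 1/activity and the
    k-th birth time is their cumulative sum.
    """
    rng = np.random.default_rng(seed)
    inter_arrival = rng.exponential(scale=1.0 / activity_bq, size=n_particles)
    return np.cumsum(inter_arrival)

# Example: a 1 kBq source and 10 particles cover ~10 ms of source time on average.
times = sample_birth_times(1e3, 10, seed=42)
```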

Checklist

  • I have performed a self-review of my own code
  • I have run clang-format (version 15) on any C++ source files (if applicable)
  • I have followed the style guidelines for Python source files (if applicable)
  • I have made corresponding changes to the documentation (if applicable)
  • I have added tests that prove my fix is effective or that my feature works (if applicable)

@GuySten
Contributor

GuySten commented Feb 11, 2026

I suggest implementing the activity-based timing in the following way (a rough sketch of what this could look like follows the list):

  1. Implement PoissonProcess as a new stochastic process in openmc.stats.
  2. Implement saving to and loading from XML for this PoissonProcess.
  3. Make the OpenMC source accept stochastic processes for the time variable.
  4. Add functionality in the Python API to auto-generate a PoissonProcess from the strength of the source. You could use something like source.set_time_based_on_strength_as_activity(), or maybe something shorter 😀.
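To make the suggestion concrete, a rough, hypothetical sketch of what such an API could look like (the PoissonProcess class and its methods are only the proposal above, not existing OpenMC API; the XML element names are placeholders):

```python
import xml.etree.ElementTree as ET

class PoissonProcess:
    """Hypothetical stochastic process for openmc.stats (sketch only).

    rate : float
        Event rate in 1/s; for a radioactive source this is the activity in Bq.
    """

    def __init__(self, rate):
        self.rate = rate

    def to_xml_element(self):
        # Serialize in the same spirit as openmc.stats distributions (item 2).
        elem = ET.Element('time')
        elem.set('type', 'poisson-process')
        elem.set('rate', str(self.rate))
        return elem

    @classmethod
    def from_xml_element(cls, elem):
        return cls(float(elem.get('rate')))


# Hypothetical helper from item 4, derived from the source strength:
# source.time = PoissonProcess(rate=source.strength)
```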

@GuySten
Contributor

GuySten commented Feb 12, 2026

I have a few design questions to understand this feature:

  1. Does increasing the particle number not improve statistics, but instead increase the simulated time window?
  2. What is the meaning of batches and generations?
  3. How do you converge results?
  4. Does information about this feature appear in the literature?

This feature is more complicated than I thought.
@paulromano, can you provide your thoughts about this PR?

@cfichtlscherer
Contributor Author

@GuySten, thanks for your feedback and questions:

1. Does increasing the particle number not improve statistics, but instead increase the simulated time window?
Yes, the particle number indirectly controls the duration of the simulated scenario. To get better statistics for the same time window, you'd need to run more batches (see below).
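
As a rough back-of-the-envelope check (my own illustration, not code from the PR): the expected length of the simulated time window is roughly the number of particles divided by the total activity of all sources.

```python
# Expected simulated window T ≈ N / A_total for a Poisson process:
# N sampled events with total rate A_total (Bq) cover N / A_total seconds on average.
n_particles = 1_000_000
total_activity_bq = 3.7e7          # e.g. two sources of ~18.5 MBq each
expected_window_s = n_particles / total_activity_bq
print(f"~{expected_window_s * 1e3:.1f} ms of source time per batch")  # ≈ 27.0 ms
```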

2. What is the meaning of batches and generations?
Each batch is an independent realization of the same Poisson process over the same time window (determined by n_particles). Tallies are then averaged across batches to get mean values and uncertainties.
The number of generations per batch is typically 1 in fixed-source mode, so generations don't play a meaningful role here.

3. How do you converge results?
Run more batches: tallies are averaged over the independent batch realizations, so the uncertainty shrinks as the number of batches grows.
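
A hedged usage sketch (the activity_based_timing attribute name follows this PR's description; the rest is standard openmc.Settings usage):

```python
import openmc

settings = openmc.Settings()
settings.run_mode = 'fixed source'
settings.particles = 1_000_000         # fixes the simulated time window per batch
settings.batches = 200                 # more batches -> smaller tally uncertainty
settings.activity_based_timing = True  # option proposed in this PR
```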

4. Does information about this feature appear in the literature?
I haven't seen this before in other codes. However, the underlying concepts are well-established.

@GuySten
Contributor

GuySten commented Feb 12, 2026

I suggest the following changes:

  1. PoissonProcess should be distinct from Distribution.
  2. We should maintain a list of PoissonProcesses. Each process should have a counter holding the last time it sampled.
  3. When sampling a time, we should advance this counter and return it (this should be done atomically, using #pragma omp atomic capture).
  4. When using MPI parallelization, we should divide each PoissonProcess's rate by the number of MPI ranks.
  5. When starting a new generation/batch, we will have to reset all PoissonProcess counters.

I think that way we will not have to precompute decay times, and we will get cleaner code.
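
For clarity, a sketch of this stateful sampling idea (names are illustrative; the #pragma omp atomic capture part from item 3 applies to the C++ transport kernel and isn't shown here):

```python
import random

class PoissonProcessState:
    """Illustrative per-source Poisson process with a running 'last time' counter."""

    def __init__(self, rate_bq, n_mpi_ranks=1):
        # Item 4: each rank samples an independent process at rate / n_ranks,
        # so the superposition over all ranks has the full source rate.
        self.rate = rate_bq / n_mpi_ranks
        self.last_time = 0.0

    def sample_time(self, rng):
        # Item 3: advance the counter by an exponential inter-arrival time and
        # return it; in the C++ code this update would be an atomic capture.
        self.last_time += rng.expovariate(self.rate)
        return self.last_time

    def reset(self):
        # Item 5: called at the start of each batch/generation.
        self.last_time = 0.0


rng = random.Random(1)
proc = PoissonProcessState(rate_bq=1e3, n_mpi_ranks=4)
birth_times = [proc.sample_time(rng) for _ in range(5)]
proc.reset()
```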
