Fix empty bibliography in CI docs build #1142
Conversation
Doxygen uses `bib2xhtml.pl`, which shells out to `bibtex` to format the bibliography. Without `bibtex` installed, `citelist.html` is empty and all `\cite` references render as plain text. Also removes a duplicate `graphviz` entry. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CodeAnt AI finished reviewing your PR.
Pull request overview
Updates the documentation CI workflow to ensure bibliography generation works during the Doxygen build by making bibtex available in the runner environment.
Changes:
- Install `texlive-binaries` in the docs workflow to provide `bibtex` for Doxygen's bibliography processing.
- Remove a duplicate `graphviz` entry from the apt install list.
… fixes
- Add math_symbol field to ParamDef for LaTeX symbol mapping (e.g. gamma -> γ_k)
- Define ~70 symbols inline on _r() calls in definitions.py (single source of truth)
- Show Symbol column in auto-generated parameters.md for physics parameters
- Extend lint_docs.py to validate param refs in case.md (518 refs, was unchecked)
- Add @ref cross-link validation between doc pages
- Fix doc bugs: radiue -> radius, t_step_end -> t_step_stop, remove stale tau_wrt row
- Add cross-page @ref links between equations.md, case.md, and parameters.md
- Update contributing guide Step 1 to document math= and desc= kwargs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
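The `@ref` cross-link validation described in this commit can be sketched as below. This is an illustrative sketch only: `collect_anchors`, `find_broken_refs`, and the sample pages are hypothetical names, not the actual lint_docs.py code, which is not shown in this excerpt.

```python
import re

def collect_anchors(pages):
    """Gather every section anchor ({#id}) defined across the doc pages."""
    anchors = set()
    for text in pages.values():
        anchors.update(re.findall(r"\{#([A-Za-z0-9_-]+)\}", text))
    return anchors

def find_broken_refs(pages):
    """Return (page, ref) pairs whose @ref target is not a known anchor."""
    anchors = collect_anchors(pages)
    broken = []
    for name, text in pages.items():
        for ref in re.findall(r"@ref\s+([A-Za-z0-9_-]+)", text):
            if ref not in anchors:
                broken.append((name, ref))
    return broken

pages = {
    "equations.md": "## Governing equations {#equations}\nSee @ref parameters.",
    "parameters.md": "## Parameters {#parameters}\nBack to @ref equatons.",  # typo
}
print(find_broken_refs(pages))  # the misspelled target is flagged
```

A check of this shape fails the lint step on a misspelled `@ref` target instead of leaving a dead link in the rendered docs.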
📝 Walkthrough
Changes include documentation restructuring with enhanced cross-references and mathematical formatting, a parameter system enhancement to support mathematical symbols for parameters, expanded documentation linting to validate page references, build configuration adjustments, and header cleanup across approximately 50 Fortran source files with conditional GPU declaration annotations.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed
No actionable comments were generated in the recent review.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@toolchain/mfc/lint_docs.py`:
- Around lines 123-143: The family-prefix fallback in `_is_valid_param` is too permissive (`any(p.startswith(base))`). Tighten it to accept only true family-prefix matches by requiring an underscore delimiter, e.g. `any(p.startswith(base + "_") or base.startswith(p + "_") for p in valid_params)`, so attributes like `bc_x%...` are not accepted by loose partial matches; update the condition in `_is_valid_param` accordingly.
In `@toolchain/mfc/params/definitions.py`:
- Around lines 892-894: The parameter "nb" is registered with type REAL but represents a bubble bin count, so it must be an integer. Update the `_r(...)` registration for "nb" to use the integer type constant (e.g. INT or INTEGER, consistent with other definitions) so schema validation enforces integer values and matches the Fortran expectations. Preserve the existing metadata set `{"bubbles"}` and math string `r"\f$N_b\f$"`.
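A minimal sketch of what the corrected registration implies is shown below. `_r`, `INT`, `REAL`, and the registry shape are stand-ins for the real definitions.py API, which is not shown in this excerpt; only the metadata set and math string are taken from the review comment.

```python
# Stand-in type constants and registry; the real definitions.py API differs.
INT, REAL = "integer", "real"
REGISTRY = {}

def _r(name, ptype, tags=frozenset(), math=None):
    REGISTRY[name] = {"type": ptype, "tags": set(tags), "math": math}

# "nb" counts bubble bins, so it must validate as an integer while
# keeping its existing metadata and math symbol.
_r("nb", INT, tags={"bubbles"}, math=r"\f$N_b\f$")

def validates(name, value):
    """Schema-style check: INT params reject non-integer values."""
    expected = REGISTRY[name]["type"]
    if expected == INT:
        return isinstance(value, int) and not isinstance(value, bool)
    return isinstance(value, (int, float))

print(validates("nb", 3), validates("nb", 3.5))  # True False
```

With the REAL registration, `nb = 3.5` would have passed schema validation and only failed later against the Fortran integer declaration.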
Match family prefixes only at structural boundaries to avoid silently accepting typos like patch_ic as valid. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tighten compound-param validation to require both a known family prefix and a known attribute. Immediately caught two doc bugs: z_comain -> z_domain typo, model%%filepath -> model_filepath. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
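The compound-parameter rule described in this commit (both a known family prefix and a known attribute must match) can be sketched as below. The `FAMILIES` and `ATTRIBUTES` sets are illustrative stand-ins, not the project's real tables; only the `z_comain`/`z_domain` example comes from the commit message.

```python
FAMILIES = {"bc_x", "bc_y", "bc_z", "z_domain", "patch_icpp"}
ATTRIBUTES = {"beg", "end", "geometry", "model_filepath"}

def is_valid_compound(name):
    """A name like 'bc_x%beg' passes only if the family ('bc_x') and
    the attribute ('beg') are each independently known."""
    family, sep, attr = name.partition("%")
    if not sep:
        return family in FAMILIES
    return family in FAMILIES and attr in ATTRIBUTES

print(is_valid_compound("bc_x%beg"))      # True
print(is_valid_compound("z_comain%beg"))  # False: typo for z_domain
print(is_valid_compound("bc_x%filpath"))  # False: unknown attribute
```

Requiring both halves to match is what surfaced the two doc bugs: a typoed family (`z_comain`) and a malformed attribute reference (`model%%filepath`) each fail exactly one of the two checks.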
- Remove filename args from @file directives in all 52 .fpp files
- Add @param docs to 13 undocumented subroutines, remove duplicate !< inline comments
- Convert raw $...$ math to Doxygen \f$...\f$ in m_riemann_solvers.fpp
- Add @cond/@endcond around #ifdef blocks that confuse the Doxygen parser
- Fix getting-started.md HTML (<h3> in <summary>), contributing.md (@: escaping)
- Fix stale @param names in m_data_input.f90
- Remove obsolete Doxyfile tags (HTML_TIMESTAMP, LATEX_TIMESTAMP), fix PROJECT_NAME quoting
- Add stable {#id} anchors to generated case_constraints.md and cli-reference.md
- Add _escape_doxygen() helper for CLI help text, add missing precheck command
- Extend lint_docs.py with math syntax and section anchor checks

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```diff
 impure subroutine s_apply_ib_patches(ib_markers_sf, levelset, levelset_norm)

     integer, dimension(:, :, :), intent(inout), optional :: ib_markers_sf
-    type(levelset_field), intent(inout), optional :: levelset !< Levelset determined by models
-    type(levelset_norm_field), intent(inout), optional :: levelset_norm !< Levelset_norm determined by models
+    type(levelset_field), intent(inout), optional :: levelset
+    type(levelset_norm_field), intent(inout), optional :: levelset_norm

     integer :: i

-    ! 3D Patch Geometries
+    ! Patch Geometries
     if (p > 0) then

         !> IB Patches
         !> @{
-        ! Spherical patch
+        ! patch
         do i = 1, num_ibs

             if (patch_ib(i)%geometry == 8) then
                 call s_ib_sphere(i, ib_markers_sf)
                 call s_sphere_levelset(i, levelset, levelset_norm)
             elseif (patch_ib(i)%geometry == 9) then
                 call s_ib_cuboid(i, ib_markers_sf)
                 call s_cuboid_levelset(i, levelset, levelset_norm)
             elseif (patch_ib(i)%geometry == 10) then
                 call s_ib_cylinder(i, ib_markers_sf)
                 call s_cylinder_levelset(i, levelset, levelset_norm)
             elseif (patch_ib(i)%geometry == 11) then
                 call s_ib_3D_airfoil(i, ib_markers_sf)
                 call s_3D_airfoil_levelset(i, levelset, levelset_norm)
-            ! STL+IBM patch
+            ! patch
             elseif (patch_ib(i)%geometry == 12) then
                 call s_ib_model(i, ib_markers_sf, levelset, levelset_norm)
             end if
         end do
         !> @}
```
Suggestion: Add present() checks before using the optional arguments levelset and levelset_norm in the s_apply_ib_patches subroutine to prevent potential runtime errors. Abort with an error message if they are required but not provided. [possible issue, importance: 8]
```fortran
impure subroutine s_apply_ib_patches(ib_markers_sf, levelset, levelset_norm)
    integer, dimension(:, :, :), intent(inout), optional :: ib_markers_sf
    type(levelset_field), intent(inout), optional :: levelset
    type(levelset_norm_field), intent(inout), optional :: levelset_norm
    integer :: i
    ! Patch Geometries
    if (p > 0) then
        do i = 1, num_ibs
            if (patch_ib(i)%geometry == 8) then
                call s_ib_sphere(i, ib_markers_sf)
            elseif (patch_ib(i)%geometry == 11) then
                call s_ib_3D_airfoil(i, ib_markers_sf)
                if (present(levelset) .and. present(levelset_norm)) then
                    call s_3D_airfoil_levelset(i, levelset, levelset_norm)
                else
                    call s_mpi_abort('Levelset fields must be provided for airfoil IB patches.')
                end if
            elseif (patch_ib(i)%geometry == 12) then
                if (present(levelset) .and. present(levelset_norm)) then
                    call s_ib_model(i, ib_markers_sf, levelset, levelset_norm)
                else
                    call s_mpi_abort('Levelset fields must be provided for STL+IBM IB patches.')
                end if
            end if
        end do
```
```diff
 subroutine s_ib_model(patch_id, ib_markers_sf, STL_levelset, STL_levelset_norm)

     integer, intent(in) :: patch_id
     integer, dimension(0:m, 0:n, 0:p), intent(inout) :: ib_markers_sf

-    ! Variables for IBM+STL
-    type(levelset_field), optional, intent(inout) :: STL_levelset !< Levelset determined by models
-    type(levelset_norm_field), optional, intent(inout) :: STL_levelset_norm !< Levelset_norm determined by models
+    ! for IBM+STL
+    type(levelset_field), optional, intent(inout) :: STL_levelset
+    type(levelset_norm_field), optional, intent(inout) :: STL_levelset_norm
     real(wp) :: normals(1:3) !< Boundary normal buffer
     integer :: boundary_vertex_count, boundary_edge_count, total_vertices !< Boundary vertex
     real(wp), allocatable, dimension(:, :, :) :: boundary_v !< Boundary vertex buffer
     real(wp), allocatable, dimension(:, :) :: interpolated_boundary_v !< Interpolated vertex buffer
     real(wp) :: distance !< Levelset distance buffer
     logical :: interpolate !< Logical variable to determine whether or not the model should be interpolated

     integer :: i, j, k !< Generic loop iterators

     type(t_bbox) :: bbox, bbox_old
     type(t_model) :: model
     type(ic_model_parameters) :: params

     real(wp), dimension(1:3) :: point, model_center

     real(wp) :: grid_mm(1:3, 1:2)

     integer :: cell_num
     integer :: ncells

     real(wp), dimension(1:4, 1:4) :: transform, transform_n

     print *, " * Reading model: "//trim(patch_ib(patch_id)%model_filepath)

     model = f_model_read(patch_ib(patch_id)%model_filepath)
     params%scale(:) = patch_ib(patch_id)%model_scale(:)
     params%translate(:) = patch_ib(patch_id)%model_translate(:)
     params%rotate(:) = patch_ib(patch_id)%model_rotate(:)
     params%spc = patch_ib(patch_id)%model_spc
     params%threshold = patch_ib(patch_id)%model_threshold

     if (proc_rank == 0) then
         print *, " * Transforming model."
     end if

-    ! Get the model center before transforming the model
+    ! the model center before transforming the model
     bbox_old = f_create_bbox(model)
     model_center(1:3) = (bbox_old%min(1:3) + bbox_old%max(1:3))/2._wp

-    ! Compute the transform matrices for vertices and normals
+    ! the transform matrices for vertices and normals
     transform = f_create_transform_matrix(params, model_center)
     transform_n = f_create_transform_matrix(params)

     call s_transform_model(model, transform, transform_n)

-    ! Recreate the bounding box after transformation
+    ! the bounding box after transformation
     bbox = f_create_bbox(model)

-    ! Show the number of vertices in the original STL model
+    ! the number of vertices in the original STL model
     if (proc_rank == 0) then
         print *, ' * Number of input model vertices:', 3*model%ntrs
     end if

     call f_check_boundary(model, boundary_v, boundary_vertex_count, boundary_edge_count)

-    ! Check if the model needs interpolation
+    ! if the model needs interpolation
     if (p > 0) then
         call f_check_interpolation_3D(model, (/dx, dy, dz/), interpolate)
     else
         call f_check_interpolation_2D(boundary_v, boundary_edge_count, (/dx, dy, dz/), interpolate)
     end if

-    ! Show the number of edges and boundary edges in 2D STL models
+    ! the number of edges and boundary edges in 2D STL models
     if (proc_rank == 0 .and. p == 0) then
         print *, ' * Number of 2D model boundary edges:', boundary_edge_count
     end if

-    ! Interpolate the STL model along the edges (2D) and on triangle facets (3D)
+    ! the STL model along the edges (2D) and on triangle facets (3D)
     if (interpolate) then
         if (proc_rank == 0) then
             print *, ' * Interpolating STL vertices.'
         end if

         if (p > 0) then
             call f_interpolate_3D(model, (/dx, dy, dz/), interpolated_boundary_v, total_vertices)
         else
             call f_interpolate_2D(boundary_v, boundary_edge_count, (/dx, dy, dz/), interpolated_boundary_v, total_vertices)
         end if

         if (proc_rank == 0) then
             print *, ' * Total number of interpolated boundary vertices:', total_vertices
         end if
     end if

     if (proc_rank == 0) then
         write (*, "(A, 3(2X, F20.10))") " > Model: Min:", bbox%min(1:3)
         write (*, "(A, 3(2X, F20.10))") " >        Cen:", (bbox%min(1:3) + bbox%max(1:3))/2._wp
         write (*, "(A, 3(2X, F20.10))") " >        Max:", bbox%max(1:3)

         !call s_model_write("__out__.stl", model)
         !call s_model_write("__out__.obj", model)

         grid_mm(1, :) = (/minval(x_cc) - 0.e5_wp*dx, maxval(x_cc) + 0.e5_wp*dx/)
         grid_mm(2, :) = (/minval(y_cc) - 0.e5_wp*dy, maxval(y_cc) + 0.e5_wp*dy/)

         if (p > 0) then
             grid_mm(3, :) = (/minval(z_cc) - 0.e5_wp*dz, maxval(z_cc) + 0.e5_wp*dz/)
         else
             grid_mm(3, :) = (/0._wp, 0._wp/)
         end if

         write (*, "(A, 3(2X, F20.10))") " > Domain: Min:", grid_mm(:, 1)
         write (*, "(A, 3(2X, F20.10))") " >         Cen:", (grid_mm(:, 1) + grid_mm(:, 2))/2._wp
         write (*, "(A, 3(2X, F20.10))") " >         Max:", grid_mm(:, 2)
     end if

     ncells = (m + 1)*(n + 1)*(p + 1)
     do i = 0, m; do j = 0, n; do k = 0, p

         cell_num = i*(n + 1)*(p + 1) + j*(p + 1) + (k + 1)
         if (proc_rank == 0 .and. mod(cell_num, ncells/100) == 0) then
             write (*, "(A, I3, A)", advance="no") &
                 char(13)//" * Generating grid: ", &
                 nint(100*real(cell_num)/ncells), "%"
         end if

         point = (/x_cc(i), y_cc(j), 0._wp/)
         if (p > 0) then
             point(3) = z_cc(k)
         end if

         if (grid_geometry == 3) then
             point = f_convert_cyl_to_cart(point)
         end if

         eta = f_model_is_inside(model, point, (/dx, dy, dz/), patch_ib(patch_id)%model_spc)

-        ! Reading STL boundary vertices and compute the levelset and levelset_norm
+        ! STL boundary vertices and compute the levelset and levelset_norm
         if (eta > patch_ib(patch_id)%model_threshold) then
             ib_markers_sf(i, j, k) = patch_id
         end if

-        ! 3D models
+        ! models
         if (p > 0) then

-            ! Get the boundary normals and shortest distance between the cell center and the model boundary
+            ! the boundary normals and shortest distance between the cell center and the model boundary
             call f_distance_normals_3D(model, point, normals, distance)

-            ! Get the shortest distance between the cell center and the interpolated model boundary
+            ! the shortest distance between the cell center and the interpolated model boundary
             if (interpolate) then
                 STL_levelset%sf(i, j, k, patch_id) = f_interpolated_distance(interpolated_boundary_v, &
                                                                              total_vertices, &
                                                                              point)
             else
                 STL_levelset%sf(i, j, k, patch_id) = distance
             end if

-            ! Correct the sign of the levelset
+            ! the sign of the levelset
             if (ib_markers_sf(i, j, k) > 0) then
                 STL_levelset%sf(i, j, k, patch_id) = -abs(STL_levelset%sf(i, j, k, patch_id))
             end if

-            ! Correct the sign of the levelset_norm
+            ! the sign of the levelset_norm
             if (ib_markers_sf(i, j, k) == 0) then
                 normals(1:3) = -normals(1:3)
             end if

-            ! Assign the levelset_norm
+            ! the levelset_norm
             STL_levelset_norm%sf(i, j, k, patch_id, 1:3) = normals(1:3)
         else
-            ! 2D models
+            ! models
             if (interpolate) then
-                ! Get the shortest distance between the cell center and the model boundary
+                ! the shortest distance between the cell center and the model boundary
                 STL_levelset%sf(i, j, 0, patch_id) = f_interpolated_distance(interpolated_boundary_v, &
                                                                              total_vertices, &
                                                                              point)
             else
-                ! Get the shortest distance between the cell center and the interpolated model boundary
+                ! the shortest distance between the cell center and the interpolated model boundary
                 STL_levelset%sf(i, j, 0, patch_id) = f_distance(boundary_v, &
                                                                 boundary_edge_count, &
                                                                 point)
             end if

-            ! Correct the sign of the levelset
+            ! the sign of the levelset
             if (ib_markers_sf(i, j, k) > 0) then
                 STL_levelset%sf(i, j, 0, patch_id) = -abs(STL_levelset%sf(i, j, 0, patch_id))
             end if

-            ! Get the boundary normals
+            ! the boundary normals
             call f_normals(boundary_v, &
                            boundary_edge_count, &
                            point, &
                            normals)

-            ! Correct the sign of the levelset_norm
+            ! the sign of the levelset_norm
             if (ib_markers_sf(i, j, k) == 0) then
                 normals(1:3) = -normals(1:3)
             end if

-            ! Assign the levelset_norm
+            ! the levelset_norm
             STL_levelset_norm%sf(i, j, k, patch_id, 1:3) = normals(1:3)

         end if
```
Suggestion: In subroutine s_ib_model, add present() checks for the optional arguments STL_levelset and STL_levelset_norm before they are accessed to prevent potential runtime errors. [possible issue, importance: 8]
```fortran
subroutine s_ib_model(patch_id, ib_markers_sf, STL_levelset, STL_levelset_norm)
    integer, intent(in) :: patch_id
    integer, dimension(0:m, 0:n, 0:p), intent(inout) :: ib_markers_sf
    ! for IBM+STL
    type(levelset_field), optional, intent(inout) :: STL_levelset
    type(levelset_norm_field), optional, intent(inout) :: STL_levelset_norm
    real(wp) :: normals(1:3) !< Boundary normal buffer
    integer :: boundary_vertex_count, boundary_edge_count, total_vertices !< Boundary vertex
    real(wp), allocatable, dimension(:, :, :) :: boundary_v !< Boundary vertex buffer
    ...
    if (.not. (present(STL_levelset) .and. present(STL_levelset_norm))) then
        call s_mpi_abort('Levelset fields must be provided for STL+IBM patches.')
    end if
    ...
    if (interpolate) then
        STL_levelset%sf(i, j, k, patch_id) = f_interpolated_distance(interpolated_boundary_v, &
                                                                     total_vertices, &
                                                                     point)
    else
        STL_levelset%sf(i, j, k, patch_id) = distance
    end if
    ...
    STL_levelset_norm%sf(i, j, k, patch_id, 1:3) = normals(1:3)
```
The manual references.md page just redirected to the auto-generated Bibliography (citelist). Remove it and point the sidebar link directly to \ref citelist to avoid two near-identical sidebar entries. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
src/simulation/m_riemann_solvers.fpp (Outdated)

```fortran
!!
!! module m_riemann_solvers
```
✅ Suggestion: Restore the structured documentation header by re-adding the @file and @brief Doxygen/FORD tags. This will ensure the documentation generator can correctly parse the file header. [possible issue, importance: 6]
```diff
-!!
-!! module m_riemann_solvers
+!>
+!! @file m_riemann_solvers.fpp
+!! @brief Contains module m_riemann_solvers
+!!
+!! @details This module features a database of approximate and exact Riemann
+!!          problem solvers for the Navier-Stokes system of equations.
```
src/simulation/m_riemann_solvers.fpp (Outdated)
```fortran
!! solution. For additional information please reference:
!!
!!
!!
!!
!! The left WENO-reconstructed cell-boundary values of the
!! variables
!! The right WENO-reconstructed cell-boundary values of the
!! variables
!! The left WENO-reconstructed cell-boundary values of the
!! spatial derivatives
!! The left WENO-reconstructed cell-boundary values of the
!! spatial derivatives
!! The left WENO-reconstructed cell-boundary values of the
!! spatial derivatives
!! The right WENO-reconstructed cell-boundary values of the
!! spatial derivatives
!! The right WENO-reconstructed cell-boundary values of the
!! spatial derivatives
!! The right WENO-reconstructed cell-boundary values of the
!! spatial derivatives
!! Left averaged gradient magnitude
!! Right averaged gradient magnitude
!! Intra-cell fluxes
!! Intra-cell fluxes sources
!! Intra-cell geometric fluxes sources
!! Dir. splitting direction
!! Index bounds in the x-dir
!! Index bounds in the y-dir
!! Index bounds in the z-dir
!! Cell-averaged primitive variables
subroutine s_riemann_solver(qL_prim_rsx_vf, qL_prim_rsy_vf, qL_prim_rsz_vf, dqL_prim_dx_vf, &
```
✅ Suggestion: Restore the @param tags for the arguments of the s_riemann_solver subroutine. This will fix the parsing of parameter documentation by documentation generation tools. [possible issue, importance: 7]
```diff
-!! solution. For additional information please reference:
-!!
-!!
-!!
-!!
-!! The left WENO-reconstructed cell-boundary values of the
-!! variables
-!! The right WENO-reconstructed cell-boundary values of the
-!! variables
-!! The left WENO-reconstructed cell-boundary values of the
-!! spatial derivatives
-!! The left WENO-reconstructed cell-boundary values of the
-!! spatial derivatives
-!! The left WENO-reconstructed cell-boundary values of the
-!! spatial derivatives
-!! The right WENO-reconstructed cell-boundary values of the
-!! spatial derivatives
-!! The right WENO-reconstructed cell-boundary values of the
-!! spatial derivatives
-!! The right WENO-reconstructed cell-boundary values of the
-!! spatial derivatives
-!! Left averaged gradient magnitude
-!! Right averaged gradient magnitude
-!! Intra-cell fluxes
-!! Intra-cell fluxes sources
-!! Intra-cell geometric fluxes sources
-!! Dir. splitting direction
-!! Index bounds in the x-dir
-!! Index bounds in the y-dir
-!! Index bounds in the z-dir
-!! Cell-averaged primitive variables
+!! Riemann problem solution. For additional information please reference:
+!! 1) s_hll_riemann_solver
+!! 2) s_hllc_riemann_solver
+!! 3) s_exact_riemann_solver
+!! 4) s_hlld_riemann_solver
+!! @param qL_prim_rsx_vf The left reconstructed interface states (x-sweep)
+!! @param qL_prim_rsy_vf The left reconstructed interface states (y-sweep)
+!! @param qL_prim_rsz_vf The left reconstructed interface states (z-sweep)
+!! @param dqL_prim_dx_vf Left first-order x-derivatives at interfaces
+!! @param dqL_prim_dy_vf Left first-order y-derivatives at interfaces
+!! @param dqL_prim_dz_vf Left first-order z-derivatives at interfaces
+!! @param qR_prim_rsx_vf The right reconstructed interface states (x-sweep)
+!! @param qR_prim_rsy_vf The right reconstructed interface states (y-sweep)
+!! @param qR_prim_rsz_vf The right reconstructed interface states (z-sweep)
+!! @param dqR_prim_dx_vf Right first-order x-derivatives at interfaces
+!! @param dqR_prim_dy_vf Right first-order y-derivatives at interfaces
+!! @param dqR_prim_dz_vf Right first-order z-derivatives at interfaces
+!! @param flux_vf Intra-cell fluxes
+!! @param flux_src_vf Intra-cell source fluxes
+!! @param flux_gsrc_vf Intra-cell geometric source fluxes
+!! @param norm_dir Dimensional splitting direction
+!! @param ix Index bounds in x-direction
+!! @param iy Index bounds in y-direction
+!! @param iz Index bounds in z-direction
 subroutine s_riemann_solver(qL_prim_rsx_vf, qL_prim_rsy_vf, qL_prim_rsz_vf, dqL_prim_dx_vf, &
```
src/simulation/m_riemann_solvers.fpp (Outdated)
```diff
-!> @brief This module features a database of approximate and exact Riemann
-!! problem solvers for the Navier-Stokes system of equations, which
-!! is supplemented by appropriate advection equations that are used
-!! to capture the material interfaces. The closure of the system is
-!! achieved by the stiffened gas equation of state and any required
-!! mixture relations. Surface tension effects are accounted for and
-!! are modeled by means of a volume force acting across the diffuse
-!! material interface region. The implementation details of viscous
-!! and capillary effects, into the Riemann solvers, may be found in
-!! Perigaud and Saurel (2005). Note that both effects are available
-!! only in the volume fraction model. At this time, the approximate
-!! and exact Riemann solvers that are listed below are available:
-!! 1) Harten-Lax-van Leer (HLL)
-!! 2) Harten-Lax-van Leer-Contact (HLLC)
-!! 3) Exact
-!! 4) Harten-Lax-van Leer Discontinuities (HLLD) - for MHD only
+!! for the Navier-Stokes system of equations, which
+!! by appropriate advection equations that are used
+!! the material interfaces. The closure of the system is
+!! the stiffened gas equation of state and any required
+!! Surface tension effects are accounted for and
+!! by means of a volume force acting across the diffuse
+!! region. The implementation details of viscous
+!! effects, into the Riemann solvers, may be found in
+!! Saurel (2005). Note that both effects are available
+!! the volume fraction model. At this time, the approximate
+!! Riemann solvers that are listed below are available:
+!! Leer (HLL)
+!! Leer-Contact (HLLC)
+!!
+!! Leer Discontinuities (HLLD) - for MHD only
```
✅ Suggestion: Restore the full, untruncated documentation text in the module header. This ensures clarity, completeness, and proper parsing of references by documentation tools. [possible issue, importance: 5]
```diff
-!> @brief This module features a database of approximate and exact Riemann
-!! problem solvers for the Navier-Stokes system of equations, which
-!! is supplemented by appropriate advection equations that are used
-!! to capture the material interfaces. The closure of the system is
-!! achieved by the stiffened gas equation of state and any required
-!! mixture relations. Surface tension effects are accounted for and
-!! are modeled by means of a volume force acting across the diffuse
-!! material interface region. The implementation details of viscous
-!! and capillary effects, into the Riemann solvers, may be found in
-!! Perigaud and Saurel (2005). Note that both effects are available
-!! only in the volume fraction model. At this time, the approximate
-!! and exact Riemann solvers that are listed below are available:
-!! 1) Harten-Lax-van Leer (HLL)
-!! 2) Harten-Lax-van Leer-Contact (HLLC)
-!! 3) Exact
-!! 4) Harten-Lax-van Leer Discontinuities (HLLD) - for MHD only
-!! for the Navier-Stokes system of equations, which
-!! by appropriate advection equations that are used
-!! the material interfaces. The closure of the system is
-!! the stiffened gas equation of state and any required
-!! Surface tension effects are accounted for and
-!! by means of a volume force acting across the diffuse
-!! region. The implementation details of viscous
-!! effects, into the Riemann solvers, may be found in
-!! Saurel (2005). Note that both effects are available
-!! the volume fraction model. At this time, the approximate
-!! Riemann solvers that are listed below are available:
-!! Leer (HLL)
-!! Leer-Contact (HLLC)
-!!
-!! Leer Discontinuities (HLLD) - for MHD only
+!> @brief This module features a database of approximate and exact Riemann
+!! problem solvers for the Navier-Stokes system of equations, which
+!! is supplemented by appropriate advection equations that are used
+!! to capture the material interfaces. The closure of the system is
+!! achieved by the stiffened gas equation of state and any required
+!! mixture relations. Surface tension effects are accounted for and
+!! are modeled by means of a volume force acting across the diffuse
+!! material interface region. The implementation details of viscous
+!! and capillary effects, into the Riemann solvers, may be found in
+!! Perigaud and Saurel (2005). Note that both effects are available
+!! only in the volume fraction model. At this time, the approximate
+!! and exact Riemann solvers that are listed below are available:
+!! 1) Harten-Lax-van Leer (HLL)
+!! 2) Harten-Lax-van Leer-Contact (HLLC)
+!! 3) Exact
+!! 4) Harten-Lax-van Leer Discontinuities (HLLD) - for MHD only
```
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
src/simulation/m_hyperelastic.fpp (1)

299-313: ⚠️ Potential issue | 🟠 Major

Pre-existing: `btensor%vf` and `Gs_hyper` allocations have no matching deallocation.

`s_initialize_hyperelastic_module` allocates `btensor%vf(1:b_size)` (line 51) and `Gs_hyper(1:num_fluids)` (line 57), but `s_finalize_hyperelastic_module` only deallocates the inner `%sf` arrays and `fd_coeff_*`. The outer `btensor%vf` array and `Gs_hyper` are never deallocated, which leaks GPU memory. Based on learnings, every `@:ALLOCATE()` must have a matching `@:DEALLOCATE()`.

Proposed fix:

```diff
 impure subroutine s_finalize_hyperelastic_module()

     integer :: i !< iterator

     ! memory
     do i = 1, b_size
         @:DEALLOCATE(btensor%vf(i)%sf)
     end do
+    @:DEALLOCATE(btensor%vf)
+    @:DEALLOCATE(Gs_hyper)
     @:DEALLOCATE(fd_coeff_x_hyper)
```

src/simulation/m_cbc.fpp (1)
1692-1701: ⚠️ Potential issue | 🟠 Major

Inconsistent deallocation guard for `pi_coef_y`: missing `muscl_order > 1` check.

The allocation of `pi_coef_y` (line 328) is guarded by `weno_order > 1 .or. muscl_order > 1`, but the deallocation here on line 1699 only checks `weno_order > 1`. The same inconsistency exists for `pi_coef_z` at line 1711. This is a pre-existing bug, not introduced by this PR, but worth noting since the surrounding lines were touched.

Compare with the x-direction deallocation at line 1688, which correctly uses `weno_order > 1 .or. muscl_order > 1`.

Proposed fix:

```diff
 if (all((/bc_y%beg, bc_y%end/) <= -5) .and. all((/bc_y%beg, bc_y%end/) >= -13) .or. &
     bc_y%beg <= -5 .and. bc_y%beg >= -13 .or. &
     bc_y%end <= -5 .and. bc_y%end >= -13) then
     @:DEALLOCATE(fd_coef_y)
-    if (weno_order > 1) then
+    if (weno_order > 1 .or. muscl_order > 1) then
         @:DEALLOCATE(pi_coef_y)
     end if
 end if
```

Apply the same fix for `pi_coef_z` at line 1711.

src/common/m_constants.fpp (1)
100-116: ⚠️ Potential issue | 🟠 Major

Migrate hardcoded BC integer literals to named constants in src/simulation/m_rhs.fpp.

The new BC constants correctly enumerate from `BC_PERIODIC = -1` through `BC_DIRICHLET = -17` and affect all three executables. However, hardcoded BC integer literals remain in the codebase and must be migrated:

- Line 1649: `bc_y%beg == -2` and `bc_y%beg == -14` should use `BC_REFLECTIVE` and `BC_AXIS`
- Line 1741: `bc_y%beg == -2` and `bc_y%beg == -14` should use `BC_REFLECTIVE` and `BC_AXIS`

Replace all hardcoded negative integer BC literals with their corresponding named constants from m_constants.fpp.
src/pre_process/m_check_ib_patches.fpp (1)

67-71: ⚠️ Potential issue | 🟡 Minor

Error message omits valid geometry codes 5 and 6.

The if/elseif chain accepts geometries 2, 3, 4, 5, 6, 8, 9, 10, 11, and 12, but the abort message on line 70 only lists "2-4, 8-10, 11 or 12", omitting 5 (model) and 6 (ellipse). A user who provides geometry 5 or 6 would never reach this branch, but the message should still be accurate for maintainability.

Proposed fix:

```diff
 else
     call s_prohibit_abort("Invalid IB patch", &
                           "patch_ib("//trim(iStr)//")%geometry must be "// &
-                          "2-4, 8-10, 11 or 12.")
+                          "2-6, 8-12.")
 end if
```

src/pre_process/m_check_patches.fpp (3)
160-161: ⚠️ Potential issue | 🟠 Major

Incorrect error messages for circle patch checks.

The `@:PROHIBIT` macro aborts when the condition is true, so the messages should describe the required state, not the prohibited one:

- Line 160: `n == 0` triggers abort → message should say "n must be greater than zero" (not "n must be zero").
- Line 161: `p > 0` triggers abort → message should say "p must be zero" (not "p must be greater than zero").

Compare with the correct messages used for rectangle (Lines 175–176), line sweep (Lines 191–192), and ellipse (Lines 208–209).

Proposed fix

```diff
-@:PROHIBIT(n == 0, "Circle patch "//trim(iStr)//": n must be zero")
-@:PROHIBIT(p > 0, "Circle patch "//trim(iStr)//": p must be greater than zero")
+@:PROHIBIT(n == 0, "Circle patch "//trim(iStr)//": n must be greater than zero")
+@:PROHIBIT(p > 0, "Circle patch "//trim(iStr)//": p must be zero")
```
99-106: ⚠️ Potential issue | 🟠 Major

Dead code: the `else` branch is unreachable.

The loop runs `do i = 1, num_patches`, so `i <= num_patches` is always true — `s_check_inactive_patch_alteration_rights` is never called. The same pattern at Lines 128–135 makes `s_check_inactive_patch_primitive_variables` unreachable too.

If the intent is to also validate inactive patches (as done in the first loop at Line 45, which uses `num_patches_max`), the range should be `num_patches_max`:

Proposed fix for both loops

```diff
-do i = 1, num_patches
+do i = 1, num_patches_max
     if (i <= num_patches) then
         call s_check_active_patch_alteration_rights(i)
     else
         call s_check_inactive_patch_alteration_rights(i)
     end if
 end do
```

```diff
-do i = 1, num_patches
+do i = 1, num_patches_max
     if (i <= num_patches) then
         call s_check_active_patch_primitive_variables(i)
     else
         call s_check_inactive_patch_primitive_variables(i)
     end if
 end do
```
115-116: ⚠️ Potential issue | 🟡 Minor

Duplicate `geometry == 0` condition.

Lines 115 and 116 both check `patch_icpp(i)%geometry == 0`. One of these was likely meant to be a different geometry value (perhaps one of the now-deprecated geometries: 6, 7, 13, or 15). As-is, the duplicate is a no-op but suggests a copy-paste oversight.

src/simulation/m_bubbles.fpp (1)
22-25: ⚠️ Potential issue | 🔴 Critical

Data race in GPU parallel loop: `chi_vw`, `k_mw`, `rho_mw` are shared module-level variables written and read across parallel threads.

In m_bubbles_EE.fpp line 210, the `GPU_PARALLEL_LOOP` spans multiple `(j,k,l)` iterations in parallel. Inside this loop at line 293, all threads call `s_bwproperty(pb_local, q, chi_vw, k_mw, rho_mw)`, which writes to the module-level GPU variables via `intent(out)` parameters. These variables are not listed in the `private(...)` clause (line 210). Subsequently, line 294 calls `s_vflux`, which reads `chi_vw` and `rho_mw` as module globals (m_bubbles.fpp lines 392–393).

With multiple threads executing in parallel, concurrent writes to the same device memory locations create a race condition: one thread's write can be overwritten by another before `s_vflux` reads the values, producing incorrect results.

Fix: Add `chi_vw, k_mw, rho_mw` to the `private(...)` clause in the `GPU_PARALLEL_LOOP` and pass them explicitly to `s_vflux`, or use local variables at the call site and pass them through the call chain. The latter is preferred to avoid introducing mutable global state in parallel contexts.
🟡 Minor comments (25)
src/common/include/3dHardcodedIC.fpp-7-7 (1)
7-7: ⚠️ Potential issue | 🟡 Minor

Typo in comment: "stor" → "store".

```diff
-    ! to stor position and radii of jets from input file
+    ! to store position and radii of jets from input file
```
182-184: ⚠️ Potential issue | 🟡 Minor

Typo in comment: "is patch is" → "this patch is".

```diff
-    ! is patch is hard-coded for test suite optimization used in the
+    ! this patch is hard-coded for test suite optimization used in the
```
25-26: ⚠️ Potential issue | 🟡 Minor

Sentence fragment in Doxygen docstring.

Lines 25–26 now read: "Checks compatibility of parameters in the input file. the simulation stage", which is grammatically incomplete. It looks like "Used by" (or "for") was accidentally dropped.

Proposed fix

```diff
 !> Checks compatibility of parameters in the input file.
-!! the simulation stage
+!! Used by the simulation stage.
```

Or alternatively:

```diff
 !> Checks compatibility of parameters in the input file.
-!! the simulation stage
+!! for the simulation stage.
```

docs/documentation/visualization.md-63-80 (1)
63-80: ⚠️ Potential issue | 🟡 Minor

Address markdownlint MD049 warnings for math blocks.

Static analysis flags emphasis-style issues on these `\f[ ... \f]` blocks. Since underscores are required in TeX, consider disabling MD049 around these blocks to keep lint clean without altering math.

🧹 Suggested lint-only fix

```diff
+<!-- markdownlint-disable MD049 -->
 \f[
 (\rho \alpha)_{1}, \dots, (\rho\alpha)_{N_c}, \rho u_{1}, \dots, \rho u_{N_d}, E, \alpha_1, \dots, \alpha_{N_c}
 \f]
 and the primitive variables are
 \f[
 (\rho \alpha)_1, \dots, (\rho\alpha)_{N_c}, u_1, \dots, u_{N_d}, p, \alpha_1, \dots, \alpha_{N_c}
 \f]
 ...
 \f[
 n_b R_1, n_b \dot{R}_1, \dots, n_b R_{N_b}, n_b \dot{R}_{N_b}
 \f]
 ...
 \f[
 R_1, \dot{R}_1, \dots, R_{N_b}, \dot{R}_{N_b}
 \f]
+<!-- markdownlint-enable MD049 -->
```
2-9: ⚠️ Potential issue | 🟡 Minor

Documentation comments appear truncated/degraded throughout this file.

Many of the changed comment lines are now orphaned continuation fragments that lost their leading sentence. For example, Lines 6–8 read "into which…", "the rank of…", "the case folder…" — these are `!!` continuations whose `!>` opening clause was apparently stripped. Similarly, several parameter descriptions were reduced to bare `!` with no text (e.g., Lines 76, 117, 417, 467, 580, 826).

This pattern repeats across most changed lines in the file. If an automated tool rewrote these comments, it appears to have dropped leading clauses and descriptive content. Consider restoring the original Doxygen docstrings or ensuring each `!!` continuation has a meaningful `!>` opening line. Based on learnings, Doxygen docstrings should be added or updated for public routines in source files.
215-235: ⚠️ Potential issue | 🟡 Minor

Inconsistent `tags` argument between the two `_r()` examples.

The first example (Line 215) passes `tags` as a required positional argument, and Line 223 documents it without marking it optional. However, the second example (Line 234) omits `tags` entirely. This will confuse contributors about whether `tags` is required or optional.

Consider either adding `tags` to the second example or explicitly noting it as optional in the argument list on Line 223.
100-104: ⚠️ Potential issue | 🟡 Minor

Missing `_wp` suffix on literal `2.0` — precision inconsistency.

Line 102 uses bare `2.0` (default real precision) in the `x_step/2.0` division, while the equivalent 3D path on line 164 correctly uses `2.0_wp`. This can silently truncate to single precision if the default real kind differs from `wp`.

As per coding guidelines: "literal constants need the `_wp` suffix (e.g., `1.0_wp`, `3.14159_wp`, `1e-6_wp`)".

Proposed fix

```diff
-delta_x = merge(x_cc(0) - domain_xstart + x_step/2.0, &
-                x_cc(index_x) - domain_xstart + x_step/2.0, num_dims == 1)
+delta_x = merge(x_cc(0) - domain_xstart + x_step/2.0_wp, &
+                x_cc(index_x) - domain_xstart + x_step/2.0_wp, num_dims == 1)
```
286-287: ⚠️ Potential issue | 🟡 Minor

Copy-paste error: comment says "neo-Hookean" inside the Mooney-Rivlin solver.

Line 286 reads `! by the jacobian for neo-Hookean model` but this is `s_Mooney_Rivlin_cauchy_solver`. This was likely copied from the neo-Hookean subroutine and not updated.

Proposed fix

```diff
-        ! by the jacobian for neo-Hookean model
+        ! by the jacobian for Mooney-Rivlin model
```
127-128: ⚠️ Potential issue | 🟡 Minor

Typo: letter "O" vs digit "0" in index range notation.

`(O-m, O-n, 0-p)` — the first two use uppercase letter "O" while the third uses digit "0". Should be `(0-m, 0-n, 0-p)`.

Suggested fix

```diff
-    ! Indices for the (local) interior points (O-m, O-n, 0-p).
+    ! Indices for the (local) interior points (0-m, 0-n, 0-p).
```
7-9: ⚠️ Potential issue | 🟡 Minor

Module description appears truncated — missing connector between clauses.

The `@brief` reads: "…characterizing the simulation algorithm, initial condition stiffened equation of state." It seems to be missing words like "and the" between "initial condition" and "stiffened equation of state."

Suggested fix

```diff
 !> `@brief` This module contains all of the parameters characterizing the
-!! simulation algorithm, initial condition
-!! stiffened equation of state.
+!! simulation algorithm, initial condition, and the
+!! stiffened equation of state.
```
71-74: ⚠️ Potential issue | 🟡 Minor

Truncated/garbled comment — missing subject and grammar errors.
This block appears to have lost words during the reflow:
- Line 71: missing subject (e.g., "Parameters of the grid stretching…")
- Line 73: "stretched" → "is stretched"
- Line 74: "the grid at which" → "in the grid at which"
Suggested fix
```diff
-    ! of the grid stretching function for the x-, y- and z-coordinate
-    ! The "a" parameters are a measure of the rate at which the grid
-    ! stretched while the remaining parameters are indicative of the location
-    ! the grid at which the stretching begins.
+    ! Parameters of the grid stretching function for the x-, y- and z-coordinate
+    ! directions. The "a" parameters are a measure of the rate at which the grid
+    ! is stretched while the remaining parameters are indicative of the location
+    ! in the grid at which the stretching begins.
```
5-19: ⚠️ Potential issue | 🟡 Minor

Module-level doc comments appear truncated into sentence fragments.
Several lines read as incomplete sentences after the edit — e.g., "This system !! by the appropriate advection equations utilized to" (Line 7) is missing a verb/subject. Similar fragments appear throughout (Lines 8–9, 47–49, 56–57, 64–68, etc.). This makes the documentation harder to understand than the original.
If the intent was to strip Doxygen `@file`/`@brief` tags, consider preserving the full sentence text so the comments remain readable.
102-105: ⚠️ Potential issue | 🟡 Minor

Several modified comments are sentence fragments missing their leading verb/subject.

A number of comments throughout this file appear to have had their opening words stripped (possibly by an automated tool). They now read as incomplete sentence fragments. Representative examples:

| Line | Current text | Likely intended |
|------|--------------|-----------------|
| 102 | `! of buffer regions and apply boundary conditions` | `! Populate variables of buffer regions…` |
| 105 | `! smoothing and store in temp array` | `! Apply elliptic smoothing and store…` |
| 141 | `! smoothed data back to array of scalar fields` | `! Copy smoothed data back…` |
| 266 | `! loop` | `! Cross-stream loop` (or similar) |
| 350 | `! a random unit vector (spherical distribution)` | `! Generate a random unit vector…` |
| 365 | `!! generator.` | `!! …a linear congruential generator.` |

Please review all `~`-marked comment lines and restore any missing leading words.
242-246: ⚠️ Potential issue | 🟡 Minor

Broken docstring for `s_perturb_mixlayer` — missing words and typo.
- Line 243: "temporal mixing a hyperbolic tangent" is missing a word — likely "layer with" between "mixing" and "a".
- Line 244: "an inverter version" should be "an inverted version".
- Lines 245–246: "proposed by al. (2023, JFM)" is missing the author name before "et al."
These look like an automated rewrite accidentally dropped words from the original comment.
Suggested fix (fill in the actual author name)
```diff
 !> This subroutine computes velocity perturbations for a temporal mixing
-!! a hyperbolic tangent mean streamwise velocity
-!! an inverter version of the spectrum-based
-!! generation method proposed by
-!! al. (2023, JFM).
+!! layer with a hyperbolic tangent mean streamwise velocity profile using
+!! an inverted version of the spectrum-based
+!! generation method proposed by <Author> et
+!! al. (2023, JFM).
```
48-48: ⚠️ Potential issue | 🟡 Minor

Empty comment lines appear to be accidentally truncated section headers.

Lines 48 and 79 are just `!` with no descriptive text. These were likely section headers (e.g., "Temperature constants" and "Relativistic iteration constants") that were stripped during the comment cleanup.

Also applies to: 79-79
src/common/m_constants.fpp-71-72 (1)
71-72: ⚠️ Potential issue | 🟡 Minor

Truncated algorithm reference comment.
The comment reads: "of the algorithm described by Heirer, E. Hairer..." — it's missing the leading word(s) and also contains a typo: "Heirer" should likely be "Hairer" (the author's name is already correctly spelled later in the same line).
src/common/m_boundary_common.fpp-97-97 (1)
97-97: ⚠️ Potential issue | 🟡 Minor

Inconsistent section-marker comments — some are truncated.

Lines 97, 158, and 229 read `! of Buffers in {x,y,z}-direction` (missing "Population"), while Line 295 retains the full `! Population of Buffers in z-direction`. These appear to be accidentally truncated during the comment cleanup pass.

Also applies to: 158-158, 229-229, 295-295
src/common/m_boundary_common.fpp-1996-1999 (1)
1996-1999: ⚠️ Potential issue | 🟡 Minor

Truncated Doxygen docstring for `s_populate_grid_variables_buffers`.
Suggested fix
!> The purpose of this subroutine is to populate the buffers - !! grid variables, which are constituted of the cell- - !! and cell-width distributions, based on - !! conditions. + !! of grid variables, which are constituted of the cell-center + !! and cell-width distributions, based on the boundary + !! conditions.src/common/m_boundary_common.fpp-86-88 (1)
86-88: ⚠️ Potential issue | 🟡 Minor

Truncated Doxygen docstring for `s_populate_variables_buffers`.
Suggested fix
!> The purpose of this procedure is to populate the buffers - !! primitive variables, depending on the selected - !! + !! of primitive variables, depending on the selected boundary + !! conditions.src/pre_process/m_assign_variables.fpp-40-42 (1)
40-42: ⚠️ Potential issue | 🟡 Minor

Typo: "areassigned" → "are assigned"

Missing space on lines 40, 41, and 42 — `areassigned` should be `are assigned`.

Proposed fix

```diff
-!! (x) cell index in which the mixture or species primitive variables from the indicated patch areassigned
-!! (y,th) cell index in which the mixture or species primitive variables from the indicated patch areassigned
-!! (z) cell index in which the mixture or species primitive variables from the indicated patch areassigned
+!! (x) cell index in which the mixture or species primitive variables from the indicated patch are assigned
+!! (y,th) cell index in which the mixture or species primitive variables from the indicated patch are assigned
+!! (z) cell index in which the mixture or species primitive variables from the indicated patch are assigned
```
94-110: ⚠️ Potential issue | 🟡 Minor

Restore Doxygen docstring markers (`@brief`, `@param`) in all three docstring blocks.

The abstract interface (lines 37–45), `s_assign_patch_mixture_primitive_variables` (lines 94–110), and `s_assign_patch_species_primitive_variables` (lines 275–284) have docstrings missing the required Doxygen markers (`@brief` for summary, `@param` for parameters). The parameter fragments (`!! the patch identifier`, `!! the x-dir node index`, etc.) are orphaned and will not parse correctly.

Per docs/documentation/contributing.md, public routines must include proper Doxygen docstrings. The repository-wide pattern in other `.fpp` files (e.g., src/simulation/m_start_up.fpp) shows the correct format: `!> @brief ...` followed by `!! @param <name> description`. Restore these markers to ensure the CI docs build renders correctly.
122-126: ⚠️ Potential issue | 🟡 Minor

Same truncated-comment pattern in internal comments.

Lines like 122–126 read as sentence fragments (e.g., "so in the case … on grid … on start-"). While these are non-Doxygen `!` comments and won't surface in docs, they're still hard to parse for maintainers. Same applies to lines 85, 102, 132, 169, 175–176, 182–185.
5-16: ⚠️ Potential issue | 🟡 Minor

Truncated Doxygen `@brief` text produces incoherent documentation.
- Line 6: "allows for the creation" → missing "of a"
- Line 7: "wide variety of initial conditions. Several 1D, 2D and 3D" → missing "patches"
- Line 8: "are included that may further be combined" → missing "into more"
- Lines 9–15 similarly have articles, verbs, or objects stripped away.
Since these render in the Doxygen HTML output, the generated docs will show broken prose.
src/pre_process/m_check_patches.fpp-1-6 (1)
1-6: ⚠️ Potential issue | 🟡 Minor

Duplicate `#:include 'macros.fpp'`.

`macros.fpp` is included on both Line 1 and Line 6. One of them should be removed.

Proposed fix

```diff
-#:include 'macros.fpp'
-
 !> `@brief` This module contains subroutines that read, and check consistency
 !! user provided inputs and data.
 #:include 'macros.fpp'
```
43-44: ⚠️ Potential issue | 🟡 Minor

Inconsistent loop label comments.

Line 43 labels the loop as `! Loop`, but Line 94's closing comment still reads `! Time-Marching Loop`. These should match for readability.

Proposed fix

```diff
-    ! Time-Marching Loop
+    ! Loop
```

Also applies to: 93-94
🧹 Nitpick comments (7)
src/pre_process/m_data_output.fpp (2)
54-57: Parameter descriptions in the abstract interface reduced to empty or single-word comments.

Lines 72, 76, and 80 now contain `! markers`, `!`, and `! Norm` respectively — these previously described the `ib_markers`, `levelset`, and `levelset_norm` parameters. The same pattern appears in Lines 54–57 where the conservative variables docstring was stripped to fragments. The corresponding concrete subroutine declarations (Lines 108, 113, 117, 121) have the same issue.

These terse comments don't add value over reading the variable names themselves. Either restore the full `@param` descriptions or remove the comments entirely to avoid clutter.

Also applies to: 72-82
87-91: Variable docstrings for `t_step_dir` and `restart_dir` are incomplete fragments.

Line 88 says "into which grid and initial condition data will be placed" — missing the subject ("Time-step directory path"). Line 91 says "folder" — a single word that conveys nothing. These are the module-level variable descriptions visible in generated Doxygen output.
src/simulation/m_hyperelastic.fpp (2)
40-47: All four subroutine doc blocks are identical copy-paste text.

The doc blocks for `s_initialize_hyperelastic_module`, `s_hyperelastic_rmt_stress_update`, `s_neoHookean_cauchy_solver`, and `s_Mooney_Rivlin_cauchy_solver` contain the same text ("of the btensor takes qprimvf…"). Each subroutine serves a distinct purpose and should have its own description. The `!!` continuation lines also appear truncated — subjects are missing from most sentences.

Since this PR is specifically about improving documentation quality, these should be differentiated.
Also applies to: 89-96, 219-226, 258-265
26-28: Orphaned `!!` continuation lines not attached to a Doxygen block.

Lines 26–27 use `!!` (continuation comment) but there is no preceding `!>` to start the Doxygen block. The trailing `!<` on line 28 is a separate construct. As written, these lines won't be picked up by Doxygen as documentation for `btensor`.

src/common/m_constants.fpp (1)
92-99: Truncated boundary condition enumeration header.

The section comment describing the BC enumeration groups is incomplete — several lines appear to have lost their leading text (e.g., line 93 is just `!`, lines 95-99 list sub-categories without complete descriptions). Consider restoring the full descriptive header so future readers can quickly understand the BC taxonomy.

src/pre_process/m_assign_variables.fpp (1)
704-711: Commented-out debug prints left in.

These lines are residual debug/print statements that are commented out. They add clutter without value.

Remove dead debug comments

```diff
-    ! (j == 1) then
-    ! *, (q_prim_vf(bub_idx%rs(i))%sf(j, k, l), i = 1, nb)
-    ! *, (q_prim_vf(bub_idx%fullmom(i, 1, 0))%sf(j, k, l), i = 1, nb)
-    ! *, (R0(i), i = 1, nb)
-    ! *, patch_icpp(patch_id)%r0
-    ! *, (bub_idx%rs(i), i = 1, nb)
-    ! *, (bub_idx%fullmom(i, 1, 0), i = 1, nb)
-    ! if
```

src/simulation/m_bubbles.fpp (1)
29-42: Doxygen parameter descriptions for `f_rddot` are orphaned — not linked to `@param` tags.

The comment block (Lines 30–42) lists parameter descriptions but uses bare `!!` lines without `@param[in] fRho` style tags. Doxygen won't associate these descriptions with the actual function parameters, making them less useful in generated docs. The same pattern repeats for most functions in this file (e.g., `f_H`, `f_cgas`, `f_cpinfdot`, `f_rddot_RP`, `f_rddot_G`, etc.).

Consider converting these to proper `@param` tags, consistent with the style used for `s_bwproperty` (Lines 320–324) and `s_compute_bubbles_EE_rhs` in the other file.
Previous commit used an overly broad regex that stripped @param, @brief, and first words from comments across all .fpp files. Restore all source files from master and re-apply only:

- Remove filename args from @file directives (58 files)
- Convert raw $...$ math to \f$...\f$ in m_riemann_solvers.fpp
- Add @cond/@endcond for #ifdef blocks with vendor attributes
- Fix stale @param name in m_data_input.f90

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
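The "stripped first words" symptom mentioned above is a classic quantifier-scope mistake. The actual regex from the reverted commit is not shown here, so the broad pattern below is a hypothetical stand-in that merely reproduces the described behavior on a sample Fortran docstring:

```python
import re

line = "!> @brief Checks compatibility of parameters in the input file."

# Hypothetical overly broad pattern: intended to drop the "@word" directive,
# but the trailing \s+\w+ also swallows the first word of the prose.
broad = re.sub(r"!>\s*@\w+\s+\w+", "!>", line)

# Targeted pattern: remove only the @brief tag itself, keep the prose intact.
targeted = re.sub(r"@brief\s*", "", line)

print(broad)     # → '!> compatibility of parameters in the input file.'  (first word lost)
print(targeted)  # → '!> Checks compatibility of parameters in the input file.'
```

Restricting the substitution to exactly the directive token is what keeps the surrounding comment text intact.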
The PR changes this hunk in Doxyfile.in:

```diff
 # The default value is: My Project.

-PROJECT_NAME = "@DOXYGEN_PROJECT_NAME@"
+PROJECT_NAME = @DOXYGEN_PROJECT_NAME@
```

Suggestion: Add quotes around the @DOXYGEN_PROJECT_NAME@ variable in Doxyfile.in to ensure correct parsing of project names that contain spaces. [possible issue, importance: 6]

Suggested change:

```diff
-PROJECT_NAME = @DOXYGEN_PROJECT_NAME@
+PROJECT_NAME = "@DOXYGEN_PROJECT_NAME@"
```
````python
def check_math_syntax(repo_root: Path) -> list[str]:
    """Check that docs use Doxygen math syntax (\\f$, \\f[) not raw $$/$."""
    doc_dir = repo_root / "docs" / "documentation"
    if not doc_dir.exists():
        return []

    errors = []
    for md_file in sorted(doc_dir.glob("*.md")):
        text = md_file.read_text(encoding="utf-8")
        rel = md_file.relative_to(repo_root)
        in_code = False

        for i, line in enumerate(text.split("\n"), 1):
            if line.strip().startswith("```"):
                in_code = not in_code
                continue
            if in_code:
                continue

            # Strip inline code and Doxygen math before checking
            cleaned = re.sub(r"`[^`\n]+`", "", line)
            cleaned = re.sub(r"\\f\$.*?\\f\$", "", cleaned)
            cleaned = re.sub(r"\\f\[.*?\\f\]", "", cleaned)

            if "$$" in cleaned:
                errors.append(
                    f"  {rel}:{i} uses $$...$$ display math."
                    " Fix: replace $$ with \\f[ and \\f]"
                )
                continue

            for m in re.finditer(r"\$([^$\n]+?)\$", cleaned):
                if re.search(r"\\[a-zA-Z]", m.group(1)):
                    errors.append(
                        f"  {rel}:{i} uses $...$ with LaTeX commands."
                        " Fix: replace $ delimiters with \\f$ and \\f$"
                    )
                    break  # one error per line

    return errors
````
Suggestion: Modify the check_math_syntax linter to correctly handle multi-line Doxygen math blocks by using re.DOTALL when stripping them, preventing potential false positives. [possible issue, importance: 7]
Suggested change:

````python
def check_math_syntax(repo_root: Path) -> list[str]:
    """Check that docs use Doxygen math syntax (\\f$, \\f[) not raw $$/$."""
    doc_dir = repo_root / "docs" / "documentation"
    if not doc_dir.exists():
        return []

    errors = []
    for md_file in sorted(doc_dir.glob("*.md")):
        raw = md_file.read_text(encoding="utf-8")
        rel = md_file.relative_to(repo_root)
        text = _strip_code_blocks(raw)

        # Remove Doxygen math blocks (can span multiple lines)
        text = re.sub(r"\\f\[.*?\\f\]", "", text, flags=re.DOTALL)
        text = re.sub(r"\\f\$.*?\\f\$", "", text, flags=re.DOTALL)

        for i, line in enumerate(text.split("\n"), 1):
            # Strip inline code before checking a line
            cleaned = re.sub(r"`[^`\n]+`", "", line)

            if "$$" in cleaned:
                errors.append(
                    f"  {rel}:{i} uses $$...$$ display math."
                    " Fix: replace $$ with \\f[ and \\f]"
                )
                continue

            for m in re.finditer(r"\$([^$\n]+?)\$", cleaned):
                if re.search(r"\\[a-zA-Z]", m.group(1)):
                    errors.append(
                        f"  {rel}:{i} uses $...$ with LaTeX commands."
                        " Fix: replace $ delimiters with \\f$ and \\f$"
                    )
                    break  # one error per line

    return errors
````
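The multi-line failure this suggestion targets can be reproduced standalone: line-by-line stripping never matches a `\f[ ... \f]` pair whose delimiters sit on different lines, so the `$`-detector fires inside math. A small self-contained sketch, independent of the lint script:

```python
import re

# A display-math block spanning three lines, with a raw $x$ inside it.
doc = "\\f[\n\\alpha_1 + $x$\n\\f]\n"

# Line-by-line stripping (original approach): the opening \f[ and closing \f]
# are on different lines, so the pattern never matches and $x$ survives.
per_line_hits = []
for line in doc.split("\n"):
    cleaned = re.sub(r"\\f\[.*?\\f\]", "", line)
    per_line_hits += re.findall(r"\$([^$\n]+?)\$", cleaned)

# Whole-text stripping with re.DOTALL (suggested approach): the math block,
# including the inner $x$, is removed before any $-scanning happens.
whole = re.sub(r"\\f\[.*?\\f\]", "", doc, flags=re.DOTALL)
whole_hits = re.findall(r"\$([^$\n]+?)\$", whole)

print(per_line_hits)  # → ['x']  (false positive)
print(whole_hits)     # → []     (no false positive)
```

`re.DOTALL` makes `.` match newlines, which is exactly what lets the non-greedy `.*?` span the whole block.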
````python
def check_section_anchors(repo_root: Path) -> list[str]:
    """Check that markdown ](#id) links have matching {#id} definitions."""
    doc_dir = repo_root / "docs" / "documentation"
    if not doc_dir.exists():
        return []

    ignored = _gitignored_docs(repo_root)

    errors = []
    for md_file in sorted(doc_dir.glob("*.md")):
        if md_file.name in ignored:
            continue
        text = md_file.read_text(encoding="utf-8")
        rel = md_file.relative_to(repo_root)

        # Collect all {#id} anchors (outside code blocks)
        anchors = set()
        in_code = False
        for line in text.split("\n"):
            if line.strip().startswith("```"):
                in_code = not in_code
                continue
            if not in_code:
                anchors.update(re.findall(r"\{#([\w-]+)\}", line))

        # Check all ](#id) links
        in_code = False
        for i, line in enumerate(text.split("\n"), 1):
            if line.strip().startswith("```"):
                in_code = not in_code
                continue
            if in_code:
                continue
            for m in re.finditer(r"\]\(#([\w-]+)\)", line):
                if m.group(1) not in anchors:
                    errors.append(
                        f"  {rel}:{i} links to #{m.group(1)}"
                        f" but no {{#{m.group(1)}}} anchor exists."
                        f" Fix: add {{#{m.group(1)}}} to the target"
                        " section header"
                    )

    return errors
````
Suggestion: Update the check_section_anchors linter to recognize implicit anchors generated from Markdown headings, in addition to explicit {#id} anchors, to prevent false positives. [possible issue, importance: 7]
Suggested change:

````python
def check_section_anchors(repo_root: Path) -> list[str]:
    """Check that markdown ](#id) links have matching {#id} definitions."""
    doc_dir = repo_root / "docs" / "documentation"
    if not doc_dir.exists():
        return []

    def _slugify_heading(s: str) -> str:
        s = s.strip().lower()
        s = re.sub(r"[^\w\s-]", "", s)
        s = re.sub(r"\s+", "-", s)
        return s

    ignored = _gitignored_docs(repo_root)

    errors = []
    for md_file in sorted(doc_dir.glob("*.md")):
        if md_file.name in ignored:
            continue
        text = md_file.read_text(encoding="utf-8")
        rel = md_file.relative_to(repo_root)

        anchors = set()
        in_code = False
        for line in text.split("\n"):
            if line.strip().startswith("```"):
                in_code = not in_code
                continue
            if in_code:
                continue
            anchors.update(re.findall(r"\{#([\w-]+)\}", line))
            # Also accept implicit heading anchors (markdown-style)
            m = re.match(r"^\s{0,3}#{1,6}\s+(.+?)\s*$", line)
            if m:
                anchors.add(_slugify_heading(m.group(1)))

        in_code = False
        for i, line in enumerate(text.split("\n"), 1):
            if line.strip().startswith("```"):
                in_code = not in_code
                continue
            if in_code:
                continue
            for m in re.finditer(r"\]\(#([\w-]+)\)", line):
                if m.group(1) not in anchors:
                    errors.append(
                        f"  {rel}:{i} links to #{m.group(1)}"
                        f" but no {{#{m.group(1)}}} anchor exists."
                        f" Fix: add {{#{m.group(1)}}} to the target"
                        " section header"
                    )

    return errors
````
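The implicit-anchor rule in the suggestion mirrors GitHub's basic heading-slug convention (lowercase, punctuation dropped, whitespace collapsed to hyphens). A standalone sketch of just that rule — note that real GitHub slugs also deduplicate repeated headings with `-1`, `-2` suffixes, which this ignores:

```python
import re

def slugify_heading(s: str) -> str:
    # Same rule as the suggested _slugify_heading helper: lowercase,
    # drop punctuation, collapse whitespace runs into single hyphens.
    s = s.strip().lower()
    s = re.sub(r"[^\w\s-]", "", s)
    s = re.sub(r"\s+", "-", s)
    return s

print(slugify_heading("Running the Post-Process"))  # → running-the-post-process
print(slugify_heading("What's New?"))               # → whats-new
```

A link like `](#running-the-post-process)` would then resolve against the heading `## Running the Post-Process` without an explicit `{#id}` attribute.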
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##           master    #1142   +/-   ##
=======================================
  Coverage   44.03%   44.03%
=======================================
  Files          70       70
  Lines       20649    20650    +1
  Branches     2054     2054
=======================================
+ Hits         9093     9094    +1
  Misses      10368    10368
  Partials     1188     1188
```

☔ View full report in Codecov by Sentry.
User description
Summary
- Install `texlive-binaries` in the docs CI workflow so `bibtex` is available when Doxygen runs `bib2xhtml.pl` to format the bibliography
- Without `bibtex`, `citelist.html` is empty and all `\cite` references render as plain text `[key]` instead of clickable numbered links
- Remove the duplicate `graphviz` entry in the apt install line
citelist.htmlwith numbered bibliography entriesequations.htmland other pages are clickable[N]format pointing tocitelist.html#CITEREF_...postprocess_citations.pyruns successfully (adds author name prefixes)🤖 Generated with Claude Code
CodeAnt-AI Description
Fix empty bibliography in docs CI so citations render as numbered links
What Changed
[N]linking to citelist.html)Impact
✅ Clickable citation links in generated docs✅ Populated bibliography pages in CI-built documentation✅ Fewer documentation build citation failures💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.
Example
Preserve Org Learnings with CodeAnt
You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:
This helps CodeAnt AI learn and adapt to your team's coding style and standards.
Example
Retrigger review
Ask CodeAnt AI to review the PR again, by typing:
Check Your Repository Health
To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.
Summary by CodeRabbit
Documentation
Chores