42 changes: 40 additions & 2 deletions tutorials/spherex/spherex_cutouts.md
@@ -45,7 +45,9 @@ The following packages must be installed to run this notebook.

```{code-cell} ipython3
import concurrent.futures
import http.client
import time
import urllib.error

import astropy.units as u
import matplotlib.pyplot as plt
@@ -179,6 +181,35 @@ def process_cutout(row, ra, dec, cache):
row["hdus"] = hdus
```

We provide a small convenience wrapper around `process_cutout` that is used in the rest of this notebook.
It catches transient read errors and simply skips those cutouts, which is sufficient for this demonstration.
For science use cases, users may instead want to implement their own retry logic or error-handling strategy.

```{code-cell} ipython3
def process_cutout_with_error_handling(row, ra, dec, cache):
    '''
    Call `process_cutout` while catching transient read errors.

    Parameters
    ----------
    row : astropy.table.Row
        Row of the table returned by the TAP query. It is modified in place
        by this function.
    ra, dec : astropy.units.Quantity
        RA and Dec coordinates (the same ones used for the TAP query), with
        astropy units attached.
    cache : bool
        If `True`, the downloaded data is cached so that cutout processing
        runs faster next time. Turn this feature off by setting `cache=False`.
    '''
    try:
        process_cutout(row, ra, dec, cache=cache)
    # IncompleteRead: https://github.com/Caltech-IPAC/irsa-tutorials/issues/165#issuecomment-3821504954
    except (urllib.error.HTTPError, http.client.IncompleteRead):
        # These are transient read errors, so skip this cutout.
        row["hdus"] = None
```

## 7. Download the Cutouts

This process can take a while.
@@ -215,8 +246,11 @@ results_table_serial["hdus"] = np.full(len(results_table_serial), None)

t1 = time.time()
for row in results_table_serial:
    process_cutout_with_error_handling(row, ra, dec, cache=False)
print("Time to create cutouts in serial mode: {:2.2f} minutes.".format((time.time() - t1) / 60))

# Drop rows that failed to download.
results_table_serial = results_table_serial[[r["hdus"] is not None for r in results_table_serial]]
```

### 7.2 Parallel Approach
@@ -249,9 +283,13 @@ results_table_parallel["hdus"] = np.full(len(results_table_parallel), None)

t1 = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    futures = [executor.submit(process_cutout_with_error_handling, row, ra, dec, False)
               for row in results_table_parallel]
concurrent.futures.wait(futures)
print("Time to create cutouts in parallel mode: {:2.2f} minutes.".format((time.time() - t1) / 60))

# Drop rows that failed to download.
results_table_parallel = results_table_parallel[[r["hdus"] is not None for r in results_table_parallel]]
```
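One caveat worth knowing about the pattern above: `concurrent.futures.wait` does not re-raise exceptions from worker threads, so an error that `process_cutout_with_error_handling` does not catch would fail silently. A generic way to surface such errors after the fact is sketched below; the helper name is our own, not part of this tutorial.

```python
import concurrent.futures


def collect_errors(futures):
    '''Return the exceptions (if any) raised inside a list of submitted futures.'''
    errors = []
    for future in concurrent.futures.as_completed(futures):
        # future.exception() returns None when the call succeeded.
        exc = future.exception()
        if exc is not None:
            errors.append(exc)
    return errors
```

Calling `collect_errors(futures)` after the `wait` and inspecting the result would reveal any unexpected worker failures.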

## 8. Create a summary table HDU with renamed columns
15 changes: 14 additions & 1 deletion tutorials/spherex/spherex_intro.md
@@ -51,6 +51,10 @@ The following packages must be installed to run this notebook.
## 4. Imports

```{code-cell} ipython3
import http.client
import time
import urllib.error

import numpy as np
import matplotlib.pyplot as plt

@@ -146,7 +150,16 @@ You can put this URL into a browser to download the file. Or you can work with i
Use Astropy to examine the header of the file at the URL from the previous step.

```{code-cell} ipython3
# Max number of times to retry HTTP errors.
max_retries = 3
for attempt in range(max_retries):
    try:
        hdulist = fits.open(spectral_image_url)
        break
    except (urllib.error.HTTPError, http.client.IncompleteRead):
        if attempt == max_retries - 1:
            raise
        time.sleep(10 * (attempt + 1))
hdulist.info()
```
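The same retry pattern is useful whenever a file is opened over HTTP, so it can be worth factoring into a small helper. The sketch below is a generic version under our own naming and defaults; `opener` stands in for any callable, such as `fits.open`.

```python
import http.client
import time
import urllib.error


def open_with_retries(opener, *args, max_retries=3, base_delay=10, **kwargs):
    '''
    Call `opener(*args, **kwargs)`, retrying transient read errors.

    The delay grows linearly with each attempt (10 s, 20 s, ...), and the
    final failure is re-raised so the caller still sees a genuine outage.
    '''
    for attempt in range(max_retries):
        try:
            return opener(*args, **kwargs)
        except (urllib.error.HTTPError, http.client.IncompleteRead):
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (attempt + 1))
```

With such a helper, the cell above would reduce to something like `hdulist = open_with_retries(fits.open, spectral_image_url)`.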

29 changes: 20 additions & 9 deletions tutorials/spherex/spherex_psf.md
@@ -49,8 +49,10 @@ The following packages must be installed to run this notebook.
```

```{code-cell} ipython3
import http.client
import re
import time
import urllib.error

import astropy.units as u
import matplotlib.pyplot as plt
@@ -133,12 +135,21 @@ As we do below, you can use `hdul.info()` to print the list of FITS layers of th
```

```{code-cell} ipython3
# Max number of times to retry transient read errors.
max_retries = 3
for attempt in range(max_retries):
    try:
        with fits.open(spectral_image_url) as hdul:
            hdul.info()
            cutout_header = hdul['IMAGE'].header
            psf_header = hdul['PSF'].header
            cutout = hdul['IMAGE'].data
            psfcube = hdul['PSF'].data
        break
    except (urllib.error.HTTPError, http.client.IncompleteRead):
        if attempt == max_retries - 1:
            raise
        time.sleep(10 * (attempt + 1))
```

The downloaded SPHEREx image cutout contains 5 FITS layers, which are described in the [SPHEREx Explanatory Supplement](https://irsa.ipac.caltech.edu/data/SPHEREx/docs/SPHEREx_Expsupp_QR.pdf).
@@ -273,11 +284,13 @@ plt.show()

## 9. Using the SPHEREx PSF in Forward Modeling (e.g., Tractor)

The PSF returned by this notebook is oversampled relative to the native SPHEREx detector pixel grid.
This is intentional: the PSF is evaluated on a fine sub-pixel grid so that it can represent different intra-pixel source positions accurately.

Tools such as Tractor do not expect an oversampled PSF directly.
Instead, they require a PSF that is pixel-integrated at the native detector resolution and evaluated at the correct sub-pixel phase of the source.
If you pass the oversampled PSF directly into Tractor without resampling, the effective PSF width and normalization will be incorrect, which can lead to systematic differences relative to the SPHEREx Spectrophotometry Tool.
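As a rough illustration of the pixel-integration step, an oversampled PSF can be block-summed down to the native grid. This is a minimal sketch, not the SPHEREx pipeline's method: it assumes the oversampling factor divides the array size evenly and ignores the sub-pixel phase, and the function name is our own. Consult the Explanatory Supplement for the oversampling factor that applies to your data.

```python
import numpy as np


def pixel_integrate(psf, oversample):
    '''
    Block-sum an oversampled PSF onto the native detector grid and renormalize.

    `psf` is a 2D array whose side length is `oversample` times the native
    size; summing over each block approximates integrating the PSF over one
    native detector pixel.
    '''
    n = psf.shape[0] // oversample
    native = psf.reshape(n, oversample, n, oversample).sum(axis=(1, 3))
    return native / native.sum()
```

A properly phase-shifted version of this operation is what forward-modeling tools expect in place of the oversampled array.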

To use this PSF for forward modeling or fitting, you must: