44 changes: 22 additions & 22 deletions tutorials/euclid/1_Euclid_intro_MER_images.md
Original file line number Diff line number Diff line change
@@ -226,58 +226,58 @@ science_images['filters'][science_images['filters']== 'VIS_VIS'] = "VIS"
science_images['filters']
```

## The image above is very large, so let's cut out a smaller image to inspect these data.
## 4. Define cutout parameters for a smaller region of interest

```{code-cell} ipython3
######################## User defined section ############################
## How large do you want the image cutout to be?
# Set the image cutout size
im_cutout = 1.0 * u.arcmin

## What is the center of the cutout?
## For now choosing a random location on the image
## because the star itself is saturated
# Set the cutout center coordinates
# For now choose a random location on the image
# because the star itself is saturated
ra = 273.8667
dec = 64.525

## Bright star position
# Bright star position
# ra = 273.474451
# dec = 64.397273

coords_cutout = SkyCoord(ra, dec, unit='deg', frame='icrs')

##########################################################################

## Iterate through each filter
# Iterate through each filter

cutout_list = []

for url in urls:
## Use fsspec to interact with the fits file without downloading the full file
# Use fsspec to interact with the fits file without downloading the full file
hdu = fits.open(url, use_fsspec=True)
print(f"Opened {url}")

## Store the header
# Store the header
header = hdu[0].header

## Read in the cutout of the image that you want
# Read in the cutout of the image that you want
cutout_data = Cutout2D(hdu[0].section, position=coords_cutout, size=im_cutout, wcs=WCS(hdu[0].header))

## Close the file
# Close the file
# hdu.close()

## Define a new fits file based on this smaller cutout, with accurate WCS based on the cutout size
# Define a new fits file based on this smaller cutout, with accurate WCS based on the cutout size
new_hdu = fits.PrimaryHDU(data=cutout_data.data, header=header)
new_hdu.header.update(cutout_data.wcs.to_header())

## Append the cutout to the list
# Append the cutout to the list
cutout_list.append(new_hdu)

## Combine all cutouts into a single HDUList and display information
# Combine all cutouts into a single HDUList and display information
final_hdulist = fits.HDUList(cutout_list)
final_hdulist.info()
```

## 3. Visualize multiwavelength Euclid Q1 MER cutouts
## 5. Visualize multiwavelength Euclid Q1 MER cutouts

We need to determine the number of images in order to set up the grid layout, and then plot each cutout.
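The grid-layout arithmetic can be sketched in isolation (the image count here is hypothetical; the notebook derives it from the filter list):

```python
import math

num_images = 7  # hypothetical number of cutouts
ncols = 3       # columns in the plotting grid

# Rows needed so that nrows * ncols >= num_images
nrows = math.ceil(num_images / ncols)
```

With 7 images in 3 columns this gives 3 rows, leaving two empty axes, which is why the plotting cell removes unused subplots with `fig.delaxes`.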

@@ -297,15 +297,15 @@ for idx, (ax, filt) in enumerate(zip(axes, science_images['filters'])):
ax.set_ylabel('Dec')
ax.text(0.05, 0.05, filt, color='white', fontsize=14, transform=ax.transAxes, va='bottom', ha='left')

## Remove empty subplots if any
# Remove empty subplots if any
for ax in axes[num_images:]:
fig.delaxes(ax)

plt.tight_layout()
plt.show()
```

## 4. Use the Python package sep to identify and measure sources in the Euclid Q1 MER cutouts
## 6. Identify and measure sources in Euclid Q1 MER cutouts with sep

First we list all the filters so you can choose which cutout you want to extract sources on. We will choose VIS.

@@ -354,13 +354,13 @@ data_sub = img2 - bkg
```{code-cell} ipython3
######################## User defined section ############################

## Sigma threshold to consider this a detection above the global RMS
# Sigma threshold to consider this a detection above the global RMS
threshold= 3

## Minimum number of pixels required for an object. Default is 5.
# Minimum number of pixels required for an object. Default is 5.
minarea_0=2

## Minimum contrast ratio used for object deblending. Default is 0.005. To entirely disable deblending, set to 1.0.
# Minimum contrast ratio used for object deblending. Default is 0.005. To entirely disable deblending, set to 1.0.
deblend_cont_0= 0.005

flux_threshold= 0.01
@@ -372,7 +372,7 @@ sources_thr = sources[sources['flux'] > flux_threshold]
print("Found", len(sources_thr), "objects above flux threshold")
```
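The sigma cut that `threshold` controls can be illustrated with a numpy-only sketch on synthetic data. This mimics only the thresholding step, not sep's full connected-component extraction and deblending:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic background-subtracted image: pure noise plus one bright 2x2 "source"
data_sub = rng.normal(0.0, 1.0, size=(64, 64))
data_sub[30:32, 30:32] += 50.0

threshold = 3  # sigma threshold, as in the cell above
global_rms = np.std(data_sub)

# Pixels exceeding threshold * global RMS are detection candidates
detected = data_sub > threshold * global_rms
```

In sep, `minarea` would then require a minimum number of such connected pixels before the group counts as an object.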

## Lets have a look at the objects that were detected with sep in the cutout
## 7. Review detected sources on the VIS cutout


We plot the VIS cutout with the detected sources overplotted as red ellipses.
@@ -382,7 +382,7 @@ fig, ax = plt.subplots()
m, s = np.mean(data_sub), np.std(data_sub)
im = ax.imshow(data_sub, cmap='gray', origin='lower', norm=ImageNormalize(img2, interval=ZScaleInterval(), stretch=SquaredStretch()))

## Plot an ellipse for each object detected with sep
# Plot an ellipse for each object detected with sep

for i in range(len(sources_thr)):
e = Ellipse(xy=(sources_thr['x'][i], sources_thr['y'][i]),
20 changes: 10 additions & 10 deletions tutorials/euclid/4_Euclid_intro_PHZ_catalog.md
@@ -186,10 +186,10 @@ Search based on ``tileID``:

```{code-cell} ipython3
######################## User defined section ############################
## How large do you want the image cutout to be?
# Set the image cutout size
im_cutout= 5 * u.arcmin

## What is the center of the cutout?
# Set the center of the cutout
ra_cutout = 267.8
dec_cutout = 66

@@ -215,7 +215,7 @@ adql = ("SELECT DISTINCT mer.object_id, mer.ra, mer.dec, "
"AND phz.phz_median BETWEEN 1.4 AND 1.6")


## Use TAP with this ADQL string
# Use TAP with this ADQL string
result_galaxies = Irsa.query_tap(adql).to_table()
result_galaxies[:5]
```
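The redshift window in the ADQL query above can be parameterized with ordinary string formatting. A minimal sketch (the table names here are placeholders, not the actual IRSA table names, which should be taken from the notebook):

```python
# Placeholder table names; substitute the real IRSA table names
mer_table = "mer_catalogue"
phz_table = "phz_catalogue"

# Hypothetical photo-z window
z_min, z_max = 1.4, 1.6

adql = (
    f"SELECT DISTINCT mer.object_id, mer.ra, mer.dec, phz.phz_median "
    f"FROM {mer_table} AS mer "
    f"JOIN {phz_table} AS phz ON mer.object_id = phz.object_id "
    f"WHERE phz.phz_median BETWEEN {z_min} AND {z_max}"
)
```

The resulting string would then be passed to `Irsa.query_tap` exactly as in the cell above.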
@@ -235,13 +235,13 @@ Once the bug is fixed, we plan to update the code in this notebook and simplify
Due to the large field of view of the MER mosaic, let's cut out a smaller section (5'x5') of the MER mosaic to inspect the image.

```{code-cell} ipython3
## Use fsspec to interact with the fits file without downloading the full file
# Use fsspec to interact with the fits file without downloading the full file
hdu = fits.open(filename, use_fsspec=True)

## Store the header
# Store the header
header = hdu[0].header

## Read in the cutout of the image that you want
# Read in the cutout of the image that you want
cutout_image = Cutout2D(hdu[0].section, position=coords_cutout, size=im_cutout, wcs=WCS(header))
```

@@ -275,7 +275,7 @@ plt.scatter(result_galaxies['ra'], result_galaxies['dec'], s=36, facecolors='non
_ = plt.title('Galaxies between z = 1.4 and 1.6')
```

## 5. Pull the spectra on the top brightest source based on object ID
## 5. Pull spectra for one of the brightest sources by object ID

```{code-cell} ipython3
result_galaxies.sort(keys='flux_h_unif', reverse=True)
@@ -299,7 +299,7 @@ We will use TAP and an ADQL query to find the spectral data for this particular
```{code-cell} ipython3
adql_object = f"SELECT * FROM {table_1dspectra} WHERE objectid = {obj_id}"

## Pull the data on this particular galaxy
# Pull the data on this particular galaxy
result_spectra = Irsa.query_tap(adql_object).to_table()
result_spectra
```
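Selecting the brightest source, as the `sort(keys='flux_h_unif', reverse=True)` call does above, can be sketched with plain Python (the flux values here are hypothetical):

```python
# Hypothetical catalog rows with H-band fluxes
rows = [
    {"object_id": 101, "flux_h_unif": 2.5},
    {"object_id": 102, "flux_h_unif": 9.1},
    {"object_id": 103, "flux_h_unif": 4.7},
]

# Sort by flux, brightest first, then take the top entry's ID
rows.sort(key=lambda r: r["flux_h_unif"], reverse=True)
obj_id = rows[0]["object_id"]
```

That `obj_id` is what gets substituted into the `WHERE objectid = ...` clause of the spectra query.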
@@ -340,7 +340,7 @@ result_galaxies[index]
```

```{code-cell} ipython3
## How large do you want the image cutout to be?
# Set the image cutout size for the selected galaxy
size_galaxy_cutout = 2.0 * u.arcsec
```

Expand Down Expand Up @@ -369,7 +369,7 @@ ax.imshow(cutout_galaxy.data, cmap='gray', origin='lower',
norm=ImageNormalize(cutout_galaxy.data, interval=PercentileInterval(99.9), stretch=AsinhStretch()))
```

## 6. Load the image on Firefly to be able to interact with the data directly
## 6. Load the image in Firefly for interactive exploration

+++

2 changes: 1 addition & 1 deletion tutorials/euclid/5_Euclid_intro_SPE_catalog.md
@@ -172,7 +172,7 @@ Irsa.list_columns(catalog=table_1dspectra, full=True)
columns_info
```

## Find some objects with spectra in our tileID
## 3. Find some objects with spectra in our tileID

We specify the following conditions on our search:
- Signal to noise ratio column (_gf = gaussian fit) should be greater than 5
24 changes: 12 additions & 12 deletions tutorials/euclid/Euclid_ERO.md
Contributor:

This file got messed up. Numbers are being added to some of the code comments as if they were headings and the actual heading numbers aren't sequential.

Contributor Author:

got it.

@@ -97,7 +97,7 @@ import matplotlib as mpl
Next, we define some parameters for `Matplotlib` plotting.

```{code-cell} ipython3
## Plotting stuff
# Plotting stuff
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelpad'] = 7
mpl.rcParams['xtick.major.pad'] = 7
@@ -123,7 +123,7 @@ mpl.rcParams['hatch.linewidth'] = 1
def_cols = plt.rcParams['axes.prop_cycle'].by_key()['color']
```

## Setting up the Environment
## 1. Setting up the Environment

Next, we set up the environment. This includes
* setting up an output data directory (will be created if it does not exist)
@@ -148,7 +148,7 @@ cutout_size = 1.5 * u.arcmin # cutout size
coord = SkyCoord.from_name('NGC 6397')
```

## Search Euclid ERO Images
## 2. Search Euclid ERO Images

Now, we search for the Euclid ERO images using the `astroquery` package.
Note that the Euclid ERO images are not currently in the cloud, but we access them directly from IRSA using IRSA's *Simple Image Access* (SIA) methods.
@@ -227,7 +227,7 @@ Let's check out the summary table that we have created. We see that we have all
summary_table
```

## Create Cutout Images
## 3. Create Cutout Images

Now that we have a list of data products, we can create the cutouts. This is important as the full Euclid ERO images would be too large to run extraction and photometry software on them (they would simply fail due to memory issues).

@@ -265,7 +265,7 @@ for ii,filt in tqdm(enumerate(filters)):
hdu.header["FILTER"] = filt.upper()
hdulcutout.append(hdu)

## Save the HDUL cube:
# Save the HDUL cube:
hdulcutout.writeto("./data/euclid_images_test.fits", overwrite=True)
```

@@ -294,7 +294,7 @@ for ii,filt in enumerate(filters):
plt.show()
```

## Extract Sources and Measure their Photometry on the VIS Image
## 4. Extract Sources and Measure their Photometry on the VIS Image

Now that we have the images in memory (and on disk - but we do not need them, yet), we can measure the fluxes of the individual stars.
Our simple photometry pipeline has different parts:
@@ -308,7 +308,7 @@ Our simple photometry pipeline has different parts:
We start by extracting the sources using `sep`. We first isolate the data that we want to look at (the VIS image only).

```{code-cell} ipython3
## Get Data (this will be replaced later)
# Get Data (this will be replaced later)
img = hdulcutout["VIS_SCIENCE"].data
hdr = hdulcutout["VIS_SCIENCE"].header
img[img == 0] = np.nan
@@ -378,7 +378,7 @@ resimage = psfphot.make_residual_image(data = img-median, psf_shape = (9, 9))
We now want to add the best-fit coordinates (R.A. and Decl.) to the VIS photometry catalog. For this, we have to convert the image coordinates into sky coordinates using the WCS information. We will need these coordinates because we want to use them as positional priors for the photometry measurement on the NISP images.

```{code-cell} ipython3
## Add coordinates to catalog
# Add coordinates to catalog
wcs1 = WCS(hdr) # VIS
radec = wcs1.all_pix2world(phot["x_fit"],phot["y_fit"],0)
phot["ra_fit"] = radec[0]
Expand Down Expand Up @@ -424,7 +424,7 @@ ax1.set_yscale('log')
plt.show()
```

## Measure the Photometry on the NISP Images
## 5. Measure the Photometry on the NISP Images

We now have the photometry and the position of sources on the VIS image. We can now proceed with similar steps on the NISP images. Because the NISP PSF and pixel scale are larger than those of the VIS images, we take advantage of position prior-based forced photometry.
For this, we use the positions of the VIS measurements and perform PSF fitting on the NISP image using these priors.
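Prior-based forced photometry can be illustrated with a numpy-only sketch: instead of re-detecting sources on the second image, we measure flux at positions fixed in advance. A simple box sum on synthetic data stands in here for the actual PSF fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "NISP" image: low noise plus one source of known total flux
img = rng.normal(0.0, 0.01, size=(40, 40))
img[20, 20] += 100.0  # hypothetical point source

# Position prior from the "VIS" catalog (held fixed, not re-fit)
x0, y0 = 20, 20
half = 2  # half-width of the measurement box, in pixels

# Forced measurement: sum pixels in a small box around the prior position
flux = img[y0 - half : y0 + half + 1, x0 - half : x0 + half + 1].sum()
```

In the notebook itself, `photutils` PSF photometry plays this role, fitting only the flux while the positions come from the VIS catalog.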
@@ -508,7 +508,7 @@ ax2.plot(phot2["x_fit"], phot2["y_fit"] , "o", markersize=8 , markeredgecolor="r
plt.show()
```

## Load Gaia Catalog
## 6. Load Gaia Catalog

We now load the Gaia sources at the location of the globular cluster. The goal is to compare Gaia's photometry to that derived above from the Euclid VIS and NISP images. This is scientifically useful; for example, we can compute the colors of the stars in the Gaia optical bands and the Euclid near-IR bands.
To search for Gaia sources, we use `astroquery` again.
@@ -563,7 +563,7 @@ ax2.set_title("NISP")
plt.show()
```

## Match the Gaia Catalog to the VIS and NISP Catalogs
## 7. Match the Gaia Catalog to the VIS and NISP Catalogs

Now, we match the Gaia source positions to the extracted sources in the VIS and NISP images.
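The positional cross-match can be sketched with numpy as a nearest-neighbour search within a maximum separation (the coordinates here are hypothetical flat-sky offsets; a real match would use `SkyCoord.match_to_catalog_sky` on spherical coordinates):

```python
import numpy as np

# Hypothetical flat-sky positions, in arcsec
gaia = np.array([[0.0, 0.0], [10.0, 10.0]])
vis = np.array([[0.2, 0.1], [50.0, 50.0], [10.1, 9.9]])

max_sep = 1.0  # arcsec, maximum separation to accept a match

# Pairwise separations: shape (n_gaia, n_vis)
d = np.linalg.norm(gaia[:, None, :] - vis[None, :, :], axis=-1)

nearest = d.argmin(axis=1)          # closest VIS source per Gaia source
matched = d.min(axis=1) < max_sep   # accept only sufficiently close pairs
```

The same logic applies to the NISP catalog; only the input positions change.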

Expand Down Expand Up @@ -628,7 +628,7 @@ ax1.set_ylabel("I$_E$ [mag]")
plt.show()
```

## Visualization with Firefly
## 8. Visualization with Firefly

At the end of this Notebook, we demonstrate how we can visualize the images and catalogs created above in `Firefly`.

16 changes: 8 additions & 8 deletions tutorials/simulated-data/OpenUniverse2024Preview_Firefly.md
@@ -86,7 +86,7 @@ from reproject import reproject_interp
from io import BytesIO
```

## Learn where the OpenUniverse2024 data are hosted in the cloud.
## 1. Learn where the OpenUniverse2024 data are hosted in the cloud

The OpenUniverse2024 data preview is hosted in the cloud via Amazon Web Services (AWS). To access these data, you need to create a client to read data from Amazon's Simple Storage Service (S3) buckets, and you need to know some information about those buckets. The OpenUniverse2024 data preview contains simulations of the Roman Wide-Area Survey (WAS) and the Roman Time Domain Survey (TDS). In this tutorial, we will focus on the WAS.
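Reading from a public bucket needs no AWS credentials; one convenient pattern is to address objects through anonymous HTTPS URLs. A minimal sketch (the bucket name and URL pattern below are assumptions for illustration; the actual endpoint should be checked against the data release documentation):

```python
# Hypothetical bucket name; the key follows the prefixes defined below
bucket = "example-bucket"
key = "openuniverse2024/roman/preview/example.fits"

def https_url(key: str, bucket: str = bucket) -> str:
    """Anonymous HTTPS URL for a public S3 object (assumed endpoint pattern)."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

url = https_url(key)
```

The notebook's `https_url` helper serves the same purpose when constructing links to the coadd files.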

@@ -100,7 +100,7 @@ RUBIN_PREFIX = "openuniverse2024/rubin/preview"
RUBIN_COADD_PATH = f"{RUBIN_PREFIX}/u/descdm/preview_data_step3_2877_19_w_2024_12/20240403T150003Z/deepCoadd_calexp/2877/19"
```

## Roman Coadds
## 2. Roman Coadds

The Nancy Grace Roman Space Telescope will carry out a wide-area survey (WAS) in the near infrared. The OpenUniverse2024 data preview includes coadded mosaics of simulated WAS data, created with the IMCOM algorithm (Rowe et al. 2011). Bands include F184, H158, J129, K213, Y106. In this section, we define some functions that make it convenient to retrieve a given cloud-hosted simulated Roman coadd based on position and filter.

@@ -237,7 +237,7 @@ plt.imshow(coadd_roman['data'], origin='lower',
plt.plot(*coord_arr_idx, 'r+', markersize=15)
```

## Rubin Coadds
## 3. Rubin Coadds

The OpenUniverse2024 data preview includes coadded mosaics in the following filters: u, g, r, i, z, y. In this section, we define some functions that make it convenient to retrieve a given cloud-hosted simulated Rubin coadd based on position and filter.

@@ -327,7 +327,7 @@ coadd_s3_fpath_rubin = get_rubin_coadd_fpath(filter_rubin)
https_url(coadd_s3_fpath_rubin)
```

## Compare simulated Roman and Rubin cutouts for a selected position
## 4. Compare simulated Roman and Rubin cutouts for a selected position

+++

@@ -366,7 +366,7 @@ fig.suptitle(f"Cutouts at ({coord.ra}, {coord.dec}) with {cutout_size} size", fo
plt.tight_layout(rect=[0, 0, 1, 0.97])
```

## Use Firefly to interactively identify a blended source
## 5. Use Firefly to interactively identify a blended source

Clearly, the simulated Roman coadd has higher spatial resolution than the Rubin simulated coadd. Let's try to locate blended objects to compare in the simulated Rubin and Roman images. We will use Firefly's interactive visualization to make this task easier.

@@ -452,7 +452,7 @@ point_region = f'icrs;point {coords_of_interest.ra.value}d {coords_of_interest.d
fc.add_region_data(region_data=point_region, region_layer_id=roman_regions_id)
```

## Plot cutouts of the identified blended source
## 6. Plot cutouts of the identified blended source

```{code-cell} ipython3
coadd_roman = get_roman_coadd(coords_of_interest, filter_roman)
@@ -501,7 +501,7 @@ plt.tight_layout(rect=[0, 0, 1, 0.97])
# plt.savefig("plot.pdf", bbox_inches='tight', pad_inches=0.2)
```

## Use Firefly to visualize the OpenUniverse2024 data preview catalogs
## 7. Use Firefly to visualize the OpenUniverse2024 data preview catalogs
Let's inspect the properties of sources in the Rubin coadd image. For this, we will use the input truth files present in the S3 bucket.

The OpenUniverse2024 data preview includes the input truth files that were used to create the simulated images. These files are in Parquet and HDF5 format, and include information about the properties of galaxies, stars, and transients.
@@ -614,7 +614,7 @@ point_region = f'icrs;point {high_z_gal_coords.ra.value}d {high_z_gal_coords.dec
fc.add_region_data(region_data=point_region, region_layer_id=roman_regions_id)
```

## Plot 3-color Roman coadd containing your region of interest
## 8. Plot 3-color Roman coadd containing your region of interest
Let's inspect the WCS of the Roman coadd first.

```{code-cell} ipython3