en:praktikum:photometrie_python [2025/04/01 11:16] – rhainich
workon ost_photometry

This is also necessary if you reconnect to columba, e.g. after a break, and want to continue the data analysis.

The OST photometry pipeline can then be installed in the terminal using //pip// as follows

In order to cope with a larger amount of data, there is a //Python// routine that performs the corrections for darkframe and flatfield per filter, then adds up the images per filter and aligns them to each other. The routine does not perform any quality control of the images, so unusable observations must be sorted out beforehand, otherwise alignment problems may occur.
| |
Copy the //Python// script ''1_add_images.py'' from the directory ''~/scripts/n2/'' into your local working directory. After that you should open it with a text editor of your choice to adjust the paths for the images. To be able to read and verify a larger amount of images, the program expects a separation of the data into different subdirectories (variables: ''bias'', ''darks'', ''flats'', ''images''). There should be one directory each for the images of the star cluster, the flatfields, and the dark frames. A possible directory structure could be:
| |
/bias/
/darks/
/flats/
/images/
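Before running the script, it can help to verify that this directory layout is in place. The following is a minimal sketch using //Python//'s ''pathlib''; the helper function is not part of the pipeline, only the directory names follow the structure above:

```python
from pathlib import Path

def check_data_directories(base='.'):
    """Report how many FIT files each expected subdirectory contains;
    None marks a missing directory."""
    report = {}
    for sub in ('bias', 'darks', 'flats', 'images'):
        directory = Path(base) / sub
        if directory.is_dir():
            report[sub] = len(list(directory.glob('*.FIT*')))
        else:
            report[sub] = None
    return report
```

A value of ''None'' in the returned dictionary indicates that the corresponding subdirectory is missing, while ''0'' means the directory exists but contains no FIT files.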
| |
The //Python// script automatically detects the filters and exposure times used. Based on this, it arranges and classifies the files automatically without any further interaction. If you are sure that all ''FIT-Header'' keywords are set correctly, you can try to put all files into one directory. In this case only the path ''raw_files'' must be set in the script. Otherwise, the paths to the subfolders for the flats, darks, etc. must be specified. Hence, either ''bias'', ''darks'', ''flats'', and ''images'' must be specified or only ''raw_files''.
| |
/*
########################## Individual folders ############################
### Path to the bias -- If set to '?', bias exposures are not used.
bias: str = '?'

### Path to the darks
darks: str = '?'

### Path to the flats
flats: str = '?'

### Path to the images
images: str = '?'

####################### Simple folder structure ##########################
raw_files: str = '?'
| |
Once the path information and the name of the star cluster have been specified, the script can be executed with
python 1_add_images.py
from astroquery.vizier import Vizier
| |
from ost_photometry.analyze.analyze import main_extract
from ost_photometry.analyze.plots import starmap, scatter
from ost_photometry.utilities import (
    find_wcs_astrometry,
    Image,
)
from ost_photometry.analyze.utilities import (
    clear_duplicates,
)

import warnings
Note: The variable names given here and in the following are only examples and can be replaced by any other name.
| |
Note: If the images are not in a subdirectory of the current directory, the path can also refer to the next higher level by using ''../''.
| |
==== Reading in the images ====
| |
We open the FIT files with image data by means of the ''Image'' class provided by the OST library. This has the advantage that we do not have to worry about the details of the read-in process, and at the same time we have a //Python// object for each image, which we can use to store some of the results obtained in the following steps. The ''Image'' class takes the following arguments: 1. index of the image (can be set to ''0''), 2. filter name, 3. path to the image file, and 4. path to the output directory:
| |
# Load images
V_image = Image(0, 'V', V_path, out_path)
B_image = Image(0, 'B', B_path, out_path)
| |
==== World Coordinate System ====
| |
The images created by the OST are usually delivered without a so-called WCS. WCS stands for World Coordinate System and allows to assign sky coordinates to each pixel in the image. In //ds9// these coordinates will be displayed in the coordinates window of //ds9// when pointing with the mouse pointer on certain pixels or objects. This is very helpful if you want to compare the positions of stars in your own image with those in star catalogs. This could be quite helpful for the calibration of the stellar magnitudes later on ;-).
| |
/*
However, for the determination of the WCS we are still missing a few prerequisites. The approximate central coordinates of the imaged sky section are already stored in the headers of the FIT files, but the exact field of view and the pixel scale of the created images still have to be determined. We achieve this with the function ''cal_fov''. We pass this function the ''image'' object created in the previous step as an argument.

The FOV information will be automatically stored in the ''image'' object.

Afterwards, we
*/

We will use the function ''find_wcs_astrometry'' to determine the WCS:
| |
# Find the WCS solution for the images
# (call signature assumed from the import above; check the OST library)
find_wcs_astrometry(V_image)
find_wcs_astrometry(B_image)
=== Finding the stars ===
| |
The identification of the stars in the two images is performed using the ''main_extract'' function. This function takes as the first argument the ''image'' object. As an optional argument, the extraction method can be selected (''photometry''). Here we specify '''APER''', and thus select aperture photometry, where the flux of the individual objects and the associated sky backgrounds is read out within fixed apertures (here circular and ring-shaped, respectively). To specify these apertures, we have to give a radius for the circular object aperture (''rstars'') and two radii for the annular background aperture (''rbg_in'' and ''rbg_out''). In previous observations, the respective values were ''4'', ''7'', and ''10''. The radii are in arcseconds.
| |
# Extract objects
main_extract(
    V_image,
    photometry_extraction_method='APER',
    radius_aperture=4.,
    # The annulus radii for the sky background (7 and 10 arcsec) are
    # passed here as well; see the OST library for the exact keyword names
)
main_extract(
    B_image,
    photometry_extraction_method='APER',
    radius_aperture=4.,
)
| |
# Correlate results from both images
id_V, id_B, d2, _ = matching.search_around_sky(coords_V, coords_B, 2.*u.arcsec)
| |
The successfully mapped stars get one entry each in ''id_V'', ''id_B'', and ''d2''. The first two lists (more precisely //Numpy// arrays) contain the index values that these stars had in the original unsorted datasets. This means that we can use these index values to sort the original tables with the fluxes and star positions in such a way that they only contain stars that were detected in both images and that the order of the stars in both data sets is the same. This assignment is essential for the further procedure.
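To illustrate what these index arrays mean, here is a toy cross-match in plain //Numpy//. It is a brute-force, flat-sky stand-in for ''matching.search_around_sky'', not the astropy implementation itself, and the positions are made up:

```python
import numpy as np

# Toy star positions (degrees); the second and third V star have B counterparts
ra_V  = np.array([10.000, 10.010, 10.020])
dec_V = np.array([41.000, 41.000, 41.000])
ra_B  = np.array([10.010, 10.020, 10.050])
dec_B = np.array([41.000, 41.000, 41.000])

max_sep = 2.0 / 3600.0  # 2 arcsec in degrees

# Brute-force pairing within the separation limit
id_V, id_B = [], []
for i in range(len(ra_V)):
    for j in range(len(ra_B)):
        d_ra = (ra_V[i] - ra_B[j]) * np.cos(np.radians(dec_V[i]))
        d_dec = dec_V[i] - dec_B[j]
        if np.hypot(d_ra, d_dec) < max_sep:
            id_V.append(i)
            id_B.append(j)

print(id_V, id_B)  # -> [1, 2] [0, 1]
```

Star 1 in the V list is star 0 in the B list, and star 2 in V is star 1 in B; star 0 in V and star 2 in B have no partner and drop out, exactly as in the real cross-match.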
| |
Before this can happen, however, potential multiple identifications must be sorted out. For example, it is possible that ''matching.search_around_sky()'' assigns object 3 from ''coords_V'' to both object 2 and object 4 from ''coords_B''. These duplicates are removed with
| |
# Identify and remove duplicate indices
id_V, d2, id_B = clear_duplicates(
    id_V,
    d2,
    id_B,
)
id_B, _, id_V = clear_duplicates(
    id_B,
    d2,
    id_V,
)
| |
The photometric tables can then be sorted by inserting the index value arrays into the corresponding tables. In this way, we simultaneously select and sort the stars identified in the two images:
| |
# Sort table with extraction results and SkyCoord object
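The sorting step itself is plain //Numpy// fancy indexing. A toy example with made-up fluxes (the real code indexes the photometry tables and the ''SkyCoord'' objects in the same way):

```python
import numpy as np

# Fluxes of the stars extracted from the V and the B image (arbitrary units);
# the two lists are neither in the same order nor of the same length
flux_V = np.array([10., 20., 30., 40.])
flux_B = np.array([31., 11., 21.])

# Index arrays from the cross-match: star id_V[k] in V is star id_B[k] in B
id_V = np.array([0, 1, 2])
id_B = np.array([1, 2, 0])

# Inserting the index arrays selects the matched stars and brings
# both datasets into the same order
flux_V_matched = flux_V[id_V]  # -> [10. 20. 30.]
flux_B_matched = flux_B[id_B]  # -> [11. 21. 31.]
```

After this step, entry ''k'' in both arrays refers to the same physical star, which is the prerequisite for computing colors such as ''B-V'' later on.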
In the next step we can perform the actual download. For this purpose we use the function ''.query_region''. We have to pass to it the coordinates and the size of the sky region to be queried. Fortunately, both are already known. We know the coordinates from the FIT headers of the star cluster images and the radius of the region is simply the field of view, which we already calculated above. Both values can be taken from the ''V_image'' object.
| |
calib_tbl = v.query_region(V_image.coordinates_image_center, radius=V_image.field_of_view_x*u.arcmin)[0]
| |
The table ''calib_tbl'' now comprises all objects contained in the **APASS** catalog that are in our field of view, together with their ''B'' and ''V'' magnitudes.
    coord_calib,
    2.*u.arcsec,
)

As also described above, the duplicates must now be sorted out:

# Identify and remove duplicate indices
ind_fit, d2, ind_lit = clear_duplicates(
    ind_fit,
    d2,
    ind_lit,
)
ind_lit, _, ind_fit = clear_duplicates(
    ind_lit,
    d2,
    ind_fit,
)
| |
One way to check the validity of the calibration stars is to display them on a starmap (similar to what the ''main_extract'' above does automatically). But now we want to display the downloaded star positions as well as the stars that were actually used for the calibration later on. For this purpose the OST library offers a suitable function (''starmap'') which can create such plots. This function can be loaded via
| |
from ost_photometry.analyze.plots import starmap
| |
Since this function expects as input an astropy table with the data to be plotted, we must first create this table before we can plot the starmap. The positions of the calibration stars are not yet available in pixel coordinates, because we got this information from the Simbad or Vizier database. Therefore, we need to generate these. At this point it is convenient that we have previously created a ''SkyCoord'' object for these stars. Using ''.to_pixel()'' and specifying the WCS of the image, we can easily generate pixel coordinates:
# Pixel coordinates of the calibration stars; 'wcs' stands for the WCS
# of the image determined above (the variable name is an example)
x_cal, y_cal = coord_calib.to_pixel(wcs)
    label_2='Identified calibration stars',
    rts='calibration',
)
| |
Here, the first argument is our output directory, the second argument is the actual image (as a //Numpy// array), the third argument is the filter label, the fourth argument is the first table, ''label'' is the label for the first dataset, ''tbl_2'' is the second table, ''label_2'' is the label for the second dataset, and ''rts'' is a description of the plot.
| |
**Alternatively**, you can also create the starmap directly with the help of ''pyplot'' from the ''matplotlib'' module. This is not much more complex but offers more possibilities to customize the plot. You load ''pyplot'' by means of:

import matplotlib.pyplot as plt

Then the actual image can be loaded:
| |
plt.imshow(V_image, origin='lower')
| |
''V_image'' is the actual image data and ''origin=lower'' makes sure that the overplotting of the pixel coordinates works. Afterwards, the symbols which mark the star positions can be plotted:
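As a sketch of this overplotting step (the pixel positions are made up here; in practice ''x'' and ''y'' come from ''.to_pixel()'' as shown above), open circles can be drawn on top of the image so that the stars themselves remain visible:

```python
import matplotlib
matplotlib.use('Agg')  # render without a display
import matplotlib.pyplot as plt
import numpy as np

# Made-up pixel positions of a few detected stars
x = np.array([120.5, 340.2, 510.8])
y = np.array([80.1, 200.7, 450.3])

fig, ax = plt.subplots()
# Open red circles: facecolors='none' keeps the marker interior transparent
ax.scatter(x, y, s=80, facecolors='none', edgecolors='red',
           label='Detected stars')
ax.legend()
fig.savefig('starmap_markers.png')
```

In the real workflow, the ''scatter'' call would follow directly after the ''plt.imshow'' call above, so that the markers land on top of the displayed image.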