en:praktikum:photometrie_python

Revisions compared: 2024/09/19 05:58 by rhainich → 2025/04/01 11:16 (current) by rhainich
    workon ost_photometry
  
This is also necessary if you reconnect to columba, e.g. after a break, and want to continue the data analysis.
  
The OST photometry pipeline can then be installed in the terminal using //pip// as follows:
In order to cope with a larger amount of data, there is a //Python// routine that performs the corrections for darkframe and flatfield per filter, then adds up the images per filter and aligns them to each other. The routine does not perform any quality control of the images, so unusable observations must be sorted out beforehand, otherwise alignment problems may occur.
  
Copy the //Python// script ''1_add_images.py'' from the directory ''~/scripts/n2/'' into your local working directory. After that, open it with a text editor of your choice to adjust the paths for the images. To be able to read and verify a larger amount of images, the program expects the data to be separated into different subdirectories (variables: ''bias'', ''darks'', ''flats'', ''images''). There should be one directory each for the images of the star cluster, the flatfields, and the dark frames. A possible directory structure could be:
      
   /bias/
   /darks/
   /flats/
   /images/
      
The //Python// script automatically detects the filters and exposure times used. Based on this, it arranges and classifies the files automatically without any further interaction. If you are sure that all ''FIT-Header'' keywords are set correctly, you can try to put all files into one directory. In this case, only the path ''raw_files'' must be set in the script. Otherwise, the paths to the subfolders for the flats, darks, etc. must be specified. Hence, either ''bias'', ''darks'', ''flats'', and ''images'' must be specified, or only ''raw_files''.
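To illustrate the kind of header-based classification the script performs, here is a minimal //Python// sketch. The file names and the header keywords (''IMAGETYP'', ''FILTER'', ''EXPTIME'') are only example assumptions; the real script reads these values from the FIT headers of the files themselves (e.g. via ''astropy.io.fits'') rather than from plain dictionaries.

```python
# Hypothetical FITS-like headers -- keyword names and values are examples only
frames = [
    {'file': 'cluster_V_1.fit', 'IMAGETYP': 'Light Frame', 'FILTER': 'V', 'EXPTIME': 120.0},
    {'file': 'cluster_B_1.fit', 'IMAGETYP': 'Light Frame', 'FILTER': 'B', 'EXPTIME': 120.0},
    {'file': 'dark_120s.fit',   'IMAGETYP': 'Dark Frame',  'FILTER': None, 'EXPTIME': 120.0},
    {'file': 'flat_V.fit',      'IMAGETYP': 'Flat Field',  'FILTER': 'V',  'EXPTIME': 2.0},
]

# Group the science frames by filter and the dark frames by exposure time
lights_by_filter = {}
darks_by_exptime = {}
for frame in frames:
    if frame['IMAGETYP'] == 'Light Frame':
        lights_by_filter.setdefault(frame['FILTER'], []).append(frame['file'])
    elif frame['IMAGETYP'] == 'Dark Frame':
        darks_by_exptime.setdefault(frame['EXPTIME'], []).append(frame['file'])
```

This grouping is what allows the script to pair each stack of science images with the matching darks and flats without any user interaction.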
  
 /*
Line 85: Line 85:
  ##########################  Individual folders  ############################
  ### Path to the bias -- If set to '?', bias exposures are not used.
  bias: str = '?'

  ### Path to the darks
  darks: str = '?'

  ### Path to the flats
  flats: str = '?'

  ### Path to the images
  images: str = '?'

  #######################  Simple folder structure  ##########################
  raw_files: str = '?'
  
Once the path information and the name of the star cluster have been specified, the script can be executed with
 Note: The variable names given here and in the following are only examples and can be replaced by any other name.   Note: The variable names given here and in the following are only examples and can be replaced by any other name.  
  
Note: If the images are not in a subdirectory of the current directory, the path can also refer to the next higher level by using ''../''.
  
==== Reading in the images ====
=== Finding the stars ===
  
The identification of the stars in the two images is performed using the ''main_extract'' function. This function takes the ''image'' object as its first argument. As an optional argument, the extraction method can be selected (''photometry_extraction_method''). Here we specify '''APER''' and thus select aperture photometry, where the flux of the individual objects and the associated sky backgrounds is read out within fixed apertures (here circular and ring-shaped, respectively). To specify these apertures, we have to give a radius for the circular object aperture (''rstars'') and two radii for the annular background aperture (''rbg_in'' and ''rbg_out''). In previous observations, the respective values were ''4'', ''7'', and ''10''. The radii are in arc seconds.
  
   #   Extract objects
   main_extract(
       V_image,
       photometry_extraction_method='APER',
       radius_aperture=4.,
   main_extract(
       B_image,
       photometry_extraction_method='APER',
       radius_aperture=4.,
  
   #   Correlate results from both images
   id_V, id_B, d2, _ = matching.search_around_sky(coords_V, coords_B, 2.*u.arcsec)
  
The successfully mapped stars get one entry each in ''id_V'', ''id_B'', and ''d2''. The first two lists (more precisely, //Numpy// arrays) contain the index values that these stars had in the original, unsorted datasets, while ''d2'' holds the on-sky separations of the matched pairs. This means that we can use these index values to sort the original tables with the fluxes and star positions in such a way that they only contain stars that were detected in both images and that the order of the stars in both datasets is the same. This assignment is essential for the further procedure.
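To see how such index arrays act as a selection and sorting device, consider a minimal //NumPy// sketch (the flux values and index arrays below are made up for illustration and do not come from a real cross-match):

```python
import numpy as np

# Made-up fluxes of 5 stars extracted in V and 4 stars extracted in B
flux_V = np.array([10., 20., 30., 40., 50.])
flux_B = np.array([8., 33., 18., 44.])

# Hypothetical index arrays from a cross-match: V star 1 matches B star 2, etc.
id_V = np.array([1, 2, 3])
id_B = np.array([2, 0, 3])

# Fancy indexing selects the matched stars and puts both lists in the same order
flux_V_matched = flux_V[id_V]
flux_B_matched = flux_B[id_B]
```

After this step, entry ''i'' of ''flux_V_matched'' and entry ''i'' of ''flux_B_matched'' refer to the same physical star.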
  
Before this can happen, however, potential multiple identifications must be sorted out. For example, it is possible that ''matching.search_around_sky()'' assigns object 3 from ''coords_V'' to both object 2 and object 4 from ''coords_B''. These duplicates are removed with:

   #   Identify and remove duplicate indices
   id_V, d2, id_B = clear_duplicates(
       id_V,
       d2,
       id_B,
   )
   id_B, _, id_V = clear_duplicates(
       id_B,
       d2,
       id_V,
   )
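''clear_duplicates'' is provided by the OST pipeline. A plain //NumPy// sketch of the underlying idea (when an index occurs more than once, keep only the pairing with the smallest separation) could look as follows; the function below is purely illustrative, and the real routine's signature and behavior may differ in detail:

```python
import numpy as np

def clear_duplicates_sketch(index_primary, separation, index_secondary):
    """Keep, for every index that occurs more than once in index_primary,
    only the pairing with the smallest separation (illustrative only)."""
    keep = []
    for idx in np.unique(index_primary):
        candidates = np.flatnonzero(index_primary == idx)
        keep.append(candidates[np.argmin(separation[candidates])])
    keep = np.sort(np.asarray(keep))
    return index_primary[keep], separation[keep], index_secondary[keep]

# Object 1 of the first list was matched to objects 0 and 2 of the second list
ids_a = np.array([0, 1, 1])
d2 = np.array([0.5, 1.2, 0.4])
ids_b = np.array([3, 0, 2])

ids_a, d2, ids_b = clear_duplicates_sketch(ids_a, d2, ids_b)
# Only the closer of the two pairings for object 1 survives
```

Running the helper twice with the roles of the two index arrays swapped, as in the snippet above, removes duplicates on both sides of the match.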
The photometric tables can then be sorted by inserting the index value arrays into the corresponding tables. In this way, we simultaneously select and sort the stars identified in the two images:
  
   #   Sort table with extraction results and SkyCoord object
In the next step we can perform the actual download. For this purpose we use the function ''.query_region''. We have to pass it the coordinates and the size of the sky region to be queried. Fortunately, both are already known: we know the coordinates from the FIT headers of the star cluster images, and the radius of the region is simply the field of view, which we already calculated above. Both values can be taken from the ''V_image'' object.
  
   calib_tbl = v.query_region(V_image.coordinates_image_center, radius=V_image.field_of_view_x*u.arcmin)[0]
  
The table ''calib_tbl'' now comprises all objects contained in the **APASS** catalog that are in our field of view, together with their ''B'' and ''V'' magnitudes.
       coord_calib,
       2.*u.arcsec,
       )

As described above, the duplicates must now be sorted out:

   #   Identify and remove duplicate indices
   ind_fit, d2, ind_lit = clear_duplicates(
       ind_fit,
       d2,
       ind_lit,
   )
   ind_lit, _, ind_fit = clear_duplicates(
       ind_lit,
       d2,
       ind_fit,
   )
  
One way to check the validity of the calibration stars is to display them on a starmap (similar to what ''main_extract'' above does automatically). But now we want to display the downloaded star positions as well as the stars that were actually used for the calibration later on. For this purpose the OST library offers a suitable function (''starmap'') that can create such plots. This function can be loaded via
  
   from ost_photometry.analyze.plots import starmap
  
Since this function expects an //astropy// table with the data to be plotted as input, we must first create it before we can plot the starmap. The positions of the calibration stars are not yet available in pixel coordinates, because we got this information from the Simbad or Vizier database. Therefore, we need to generate them. At this point it is convenient that we have previously created a ''SkyCoord'' object for these stars. Using ''.to_pixel()'' and specifying the WCS of the image, we can easily generate pixel coordinates:
       label_2='Identified calibration stars',
       rts='calibration',
   )
  
Here, the first argument is our output directory, the second argument is the actual image (as a //Numpy// array), the third argument is the filter label, the fourth argument is the first table, ''label'' is the label of the first dataset, ''tbl_2'' is the second table, ''label_2'' is the label of the second dataset, and ''rts'' is a description of the plot.
  
**Alternatively**, you can also create the starmap directly with the help of ''pyplot'' from the ''matplotlib'' module. This is not much more complex but offers more possibilities to customize the plot. You load ''pyplot'' by means of:
Then the actual image can be loaded:
  
   plt.imshow(V_image, origin='lower')
  
''V_image'' is the actual image data and ''origin=lower'' makes sure that the overplotting of the pixel coordinates works. Afterwards, the symbols that mark the star positions can be plotted:
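A complete, minimal version of such a plot might look like the sketch below; the image array and the pixel positions of the "stars" are made up stand-ins for the real data:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')            # render without a display
import matplotlib.pyplot as plt

# Stand-ins for the image data and the star positions in pixel coordinates
image = np.random.default_rng(42).normal(100., 5., (200, 200))
x_cal = np.array([50., 120., 170.])
y_cal = np.array([80., 33., 150.])

fig, ax = plt.subplots()
ax.imshow(image, origin='lower', cmap='gray')
ax.scatter(x_cal, y_cal, s=60, facecolors='none', edgecolors='red',
           label='Calibration stars')
ax.legend()
fig.savefig('starmap_example.png')
```

Open circle markers (''facecolors='none''') are a common choice here, since they mark the positions without hiding the stars underneath.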