pymusepipe package

Submodules

pymusepipe.align_pipe module

MUSE-PHANGS alignment module. This module can be used to align MUSE reconstructed images either with each other or against a reference background image. It writes the results to a FITS table which can then be used to process and mosaic MUSE PIXTABLES via the MUSE ESO pipeline. It includes a normalisation factor, an estimate of the background, as well as any potential rotation. Fine tuning can be done by hand by the user, using a set of reference plots.
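As an illustrative sketch of a typical workflow (all file and folder names below are hypothetical, and the exact call signatures should be checked against the class documentation that follows):

# Hypothetical sketch: align a set of reconstructed MUSE images onto a
# reference image and write the resulting offsets into a FITS table.
from pymusepipe.align_pipe import AlignMuseDataset

align = AlignMuseDataset("ref_WFI_image.fits",           # hypothetical reference image
                         folder_reference="Reference/",  # hypothetical folders
                         folder_muse_images="Images/",
                         firstguess="crosscorr")         # first guess via cross-correlation

align.init_guess_offset()      # cross-correlation (or FITS table) first guess
align.show_offsets()           # print the current offsets
align.save_fits_offset_table(name_output_table="OFFSET_LIST.fits",
                             overwrite=True)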

class pymusepipe.align_pipe.AlignMuseDataset(name_reference, folder_reference=None, folder_muse_images=None, name_muse_images=None, sel_indices_images=None, median_window=10, subim_window=10, dynamic_range=10, border=10, hdu_ext=(0, 1), chunk_size=15, firstguess='crosscorr', **kwargs)[source]

Bases: object

Class to align MUSE images onto a reference image.

apply_extra_offset_ima(nima=0, extra_pixel=None, extra_arcsec=None, extra_rotation=None, **kwargs)[source]

Shift the image with index nima by the total offset, after adding any extra given offset. This does not return anything, but could in principle if using the output of self.shift.

Input

nima: int

Index of image to consider

extra_pixel: list of 2 floats

Extra offsets (x,y) in pixels. If None, nothing is applied

extra_arcsec: list of 2 floats

Extra offsets (x,y) in arcsec, used if extra_pixel is not provided. If None, nothing is applied.

extra_rotation: float

Rotation in degrees. If None, no extra rotation is applied

apply_optical_flow_offset_ima(nima=0)[source]

Transfer the value of the optical flow into the extra pixel

apply_optical_flow_offset_listima(list_nima=None)[source]

Apply the optical flow offset as extra pixels offsets and rotation

Input

list_nima: list

If None, will be initialised to the default list of indices

compare(data1, data2, header=None, start_nfig=1, nlevels=10, levels=None, convolve_data1=0.0, convolve_data2=0.0, showcontours=True, showcuts=True, shownormalise=True, showdiff=True, normalise=True, median_filter=True, ncuts=5, percentage=5.0, suffix_fig='', **kwargs)[source]

Compare the projected reference and MUSE image by plotting the contours, the difference and vertical/horizontal cuts.

Parameters:
  • data1, data2 (2d np.arrays) – Arrays to compare

  • header (Header) – If provided, will be used to get the WCS in the plots. Default is None (ignored).

  • polypar (ODR result) – If None, it will be recalculated

  • showcontours (bool [True]) –

  • showcuts (bool [True]) –

  • shownormalise (bool [True]) –

  • showdiff (bool [True]) – All options corresponding to 1 specific plot. By default show them all (all True)

  • ncuts (int [5]) – Number of vertical / horizontal cuts along the ratio between the 2 maps to be shown (“cuts”)

  • percentage (float [5]) – Used to compute which percentile to show

  • start_nfig (int [1]) – Number of the matplotlib Figure to start with

  • nlevels (int [10]) – Number of levels for the contour plots

  • levels (list of float [None]) – Specific list of levels if any (default is None)

  • convolve_data1 (float [0]) – If not 0, will convolve with a gaussian of that sigma

  • convolve_data2 (float [0]) – If not 0, will convolve the reference image with a gaussian of that sigma

  • savefig (bool) – If True, will save the figure into a png

  • suffix_fig (str) – Suffix name to add to the figure filenames

  • Makes a maximum of 4 figures.

compare_ima(nima=0, nima_museref=None, convolve_muse=0, convolve_reference=0.0, **kwargs)[source]

Input

nima: int

Index of input image

nima_museref: int

Index of second input image for the reference. Default is None, hence ignored and the default reference image will be used.

Create

Plots which compare the two input datasets as defined by the indices

find_cross_peak(muse_hdu, rotation=0.0, minflux=None, **kwargs)[source]

Aligns the MUSE HDU to a reference HDU

Input

muse_hdu: astropy.io.fits hdu

MUSE hdu file

name_musehdr: str

name of the muse hdr to save

rotation: float

Angle in degrees (0).

minflux: float

Minimum flux to be used in the cross-correlation. Flux below that value will be set to 0.

returns:
  • xpix_cross, ypix_cross – x and y pixel coordinates of the cross-correlation peak

find_cross_peak_ima(nima=0, minflux=None)[source]

Find the cross correlation peak and get the x and y shifts for a given image, given its index nima

Input

nima: int

Index of the image

minflux: float

Minimum flux for the cross-correlation

find_cross_peak_listima(list_nima=None, minflux=None)[source]

Run the cross correlation peaks on all MUSE images Derive the self.cross_off_pixel/arcsec parameters

Input

list_nima: list

List of indices of the images to process. Default is None, in which case all images are processed.

minflux: float [None]

Minimum flux to be used in the cross-correlation. Flux below that value will be set to 0.

get_imaref_muse(muse_hdu, rotation=0.0, minflux=0.0, **kwargs)[source]

Returns the ref image and muse images on the same grid assuming a given rotation

Input

muse_hdu: HDU

MUSE hdu file

name_musehdr: str

name of the muse hdr to save

rotation: float

Angle in degrees (0).

minflux: float

Minimum flux to prepare the image (0).

returns:
  • ima_ref, ima_muse (arrays) – Reprojected images. Note that the original images are saved in self._temp_input_origmuse and self._temp_input_origref when debug mode is on (self._debug).

get_normfactor_ima(nima=0, median_filter=True, border=0, convolve_muse=0.0, convolve_reference=0.0, chunk_size=10)[source]

Get the normalisation factor for shifted and projected images. This function only considers the input image given by index nima and the reference image (after projection).

Input

nima: int

Index of image to consider

median_filter: bool

If True, will median filter

convolve_muse: float [0]

Will convolve the image with index nima with a gaussian with that sigma. 0 means no convolution

convolve_reference: float [0]

Will convolve the reference image with a gaussian with that sigma. 0 means no convolution

border: int

Number of pixels to crop

threshold_muse: float [None]

Threshold for the input image flux to consider

chunk_size: int

Size of chunks to consider for chunk statistics (polynomial normalisation)

returns:
  • data, refdata (2d arrays) – The 2 arrays (input, reference) after processing

get_shift_from_pcc(muse_hdu, rotation=0.0, minflux=0.0, verbose=False, **kwargs)[source]

Find a guess translation using PCC

Input

muse_hdu: HDU

MUSE hdu file

rotation: float

Angle in degrees (0).

minflux: float

Minimum flux to prepare the image (0).

name_musehdr: str

Name of the muse hdr to save. Optional. Only operational if self.save_hdr is True

returns:
  • xpix_pcc, ypix_pcc – x and y pixel coordinates of the cross-correlation peak

get_shift_from_pcc_ima(nima=None, minflux=None, rotation=None, verbose=False)[source]

Run the PCC shift guess for image nima

Input

nima: int

Index of image

minflux: float [None]

Minimum flux to be used in the cross-correlation. Flux below that value will be set to 0.

rotation: float

If None, will take the init_rotangle. Otherwise it will take the input value

get_shift_from_pcc_listima(list_nima=None, minflux=None, verbose=False)[source]

Run the PCC shift guess on a list of images given by a list of indices

Input

list_nima: list

List of indices of the images to process. Default is None, in which case all images are processed.

minflux: float [None]

Minimum flux to be used in the cross-correlation. Flux below that value will be set to 0.

init_guess_offset(**kwargs)[source]

Initialise first guess, either from cross-correlation (default) or from an Offset FITS Table

Input

firstguess: str

If “crosscorr” uses cross-correlation to get the first guess of the offsets. If “fits” uses the input fits table.

init_optical_flow(muse_hdu, rotation=0.0, minflux=None, guess_translation=[0.0, 0.0], header=None, verbose=False, **kwargs)[source]

Get the optical flow for this hdu

Input

muse_hdu: HDU

Muse HDU input

rotation: float

Input rotation

minflux: float

Minimum flux to consider in the image

name_musehdr: str

Name of hdr in case those are saved (self.save_hdr is True)

init_optical_flow_ima(nima=0, minflux=None, force_pcc=False, guess_offset=(0.0, 0.0), verbose=False, provide_header=True)[source]

Initialise the optical flow using the current image with index nima

Input

nima: int

Index of image

minflux: float

Minimum flux to consider

init_optical_flow_listima(list_nima=None, **kwargs)[source]

Initialise the optical flow on a list of images given by a list of indices

Input

list_nima: list

List of indices of the images to process. Default is None, in which case all images are processed.

iterate_on_optical_flow_ima(nima=0, niter=5, verbose=False, use_rotation=True, **kwargs)[source]

Iterate solution using the optical flow guess

Input

nima: int

Index of image to consider

niter: int

Number of iterations

iterate_on_optical_flow_listima(list_nima=None, use_rotation=True, **kwargs)[source]

Run the iteration for the optical flow on a list of images given by a list of indices

Input

list_nima: list

List of indices of the images to process. Default is None, in which case all images are processed.

niter: int

Number of iterations. Optional. If not provided, will use the default in self.iterate_on_optical_flow_ima

offset_and_compare(nima=0, extra_pixel=None, extra_arcsec=None, extra_rotation=None, **kwargs)[source]

Run the offset and comparison for a given image number

Input

nima: int

Index of the image to consider

extra_pixel: list of 2 floats

Offsets in X and Y in pixels to add to the existing guessed offsets. IMPORTANT NOTE: extra_pixel will be considered first (before extra_arcsec).

extra_arcsec: list of 2 floats

Offsets in X and Y in arcsec to add to the existing guessed offsets. Ignored if extra_pixel is provided, or if None.

extra_rotation: float

Angle to rotate the image (in degrees). Ignored if None.

threshold_muse: float [0]

Threshold to consider when plotting the comparison

Additional arguments

plot (bool): if True, will plot the comparison. If not used, will use the default self.plot. The plots include:
  • flux comparison (1 to 1)

  • Map of the flux ratio

  • Contours of the two scaled maps

  • Cuts of the division between the 2 maps

See also all arguments from self.compare
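For example, continuing the sketch from the module introduction, a hand-tuned offset on one image (arbitrary index and values) might be inspected with:

# Sketch: add a small extra (x, y) shift in arcsec plus a rotation to image 2,
# then regenerate the comparison plots against the reference.
align.offset_and_compare(nima=2,
                         extra_arcsec=[0.35, -0.20],  # arbitrary extra shift [arcsec]
                         extra_rotation=0.1)          # arbitrary extra rotation [deg]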

open_hdu()[source]

Open the HDU of the MUSE and reference images

open_offset_table(name_table=None)[source]

Read offset table from fits file

Input

name_table: str

Name of the input OFFSET table

returns:
  • status – None if no table name is given, False if the file does not exist, True if it does

  • Table – the result of an astropy.Table.read of the fits table

print_images_names()[source]

Print out the names of the images being considered for alignment

print_offsets_and_norms(filename='_temp.txt', folder_output_file=None, overwrite=True)[source]

Save all offsets and norms into filename. By default, file will be overwritten.

Input

filename: str

Name of file where the output will be written

folder_output_file: str

Name of output folder where the file will be written

overwrite: bool

Default is True

Creates

Ascii file named via the filename input argument

run_optical_flow(list_nima=None, save_plot=True, use_rotation=True, verbose=False, **kwargs)[source]

Run Optical flow, first with a guess offset and then iterating. The solution is saved as an extra offset in the class, and an op_plot instance is created. If save_plot is True, it will save a set of default plots.

Input

list_nima: list

List of indices. If None, will use the default list of all images

save_plot: bool

Whether to save the optical flow diagnostic plots or not.

use_rotation: bool

True if you wish to have rotation. False otherwise

verbose: bool
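A possible refinement step, continuing the earlier sketch (the optical-flow step relies on spacepylot; the suffix is arbitrary):

# Sketch: refine the first-guess offsets with optical flow on all images,
# then save the refined offsets and flux scalings into a new FITS table.
align.run_optical_flow(save_plot=True, use_rotation=True)
align.save_fits_offset_table(suffix="_opticalflow", overwrite=True)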

run_optical_flow_ima(nima=0, save_plot=True, use_rotation=True, verbose=False, **kwargs)[source]

Run Optical flow on image with index nima, first with a guess offset and then iterating. The solution is saved as an extra offset in the class, and an op_plot instance is created. If save_plot is True, it will save a set of default plots.

Input

nima: int

Image index.

save_plot: bool

Whether to save the optical flow diagnostic plots or not.

save_fits_offset_table(name_output_table=None, folder_output_table=None, overwrite=False, suffix='', save_flux_scale=True, save_other_params=True)[source]

Save the Offsets into a fits Table

Input

folder_output_table: str [None]

Folder of the output table. If None (default) the folder for the input offset table will be used or alternatively the folder of the MUSE images.

name_output_table: str [None]

Name of the output fits table. If None (default) it will use the one given in self.name_output_table

overwrite: bool [False]

If True, overwrite if the file exists

suffix: str [“”]

Suffix to be used to add to the input name. This is handy to just modify the given fits name with a suffix (e.g., version number).

save_flux_scale: bool [True]

If True, save the flux scaling in FLUX_SCALE. If False, do not save the flux conversion.

save_other_params: bool [True]

If True, save the background and rotation. If False, do not save these 2 parameters.

Creates

A fits table with the given name (using the suffix if any)

save_image(newfits_name=None, nima=0)[source]

Save the newly determined hdu

Input

newfits_name: str

Name of the fits file to be used

nima: int [0]

Index of the image to save

Creates

A new fits file

save_polypar_ima(nima=0, beta=None)[source]

Saving the input values into the fixed arrays for the polynomial

Input

beta: list/array of 2 floats

show_linearfit_values()[source]

Print some information about the linearly fitted parameters pertaining to the normalisation.

show_norm_factors()[source]

Print some information about the normalisation factors.

show_offset_fromfits(name_table=None)[source]

Print offset table from fits file

Input

name_table: str

Name of the input OFFSET table

show_offsets()[source]

Print out the offset from the Alignment class

pymusepipe.align_pipe.arcsec_to_pixel(hdu, xy_arcsec=(0.0, 0.0))[source]

Transform from arcsec to pixel for the muse image using the hdu to extract the WCS, hence the scaling.

Input

hdu: astropy hdu (fits)

Input hdu which includes a WCS

xy_arcsec: list of 2 floats ([0,0])

Coordinates to transform from arcsec to pixel.

returns:
  • xpix, ypix (tuple or list of 2 floats) – Pixel coordinates

  • See also: pixel_to_arcsec (align_pipe.py)
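A small sketch of the round trip between the two conversion helpers (the file name and extension index are hypothetical):

# Sketch: convert an (x, y) offset from arcsec to pixels and back,
# using the WCS of a (hypothetical) MUSE reconstructed image.
from astropy.io import fits
from pymusepipe.align_pipe import arcsec_to_pixel, pixel_to_arcsec

with fits.open("IMAGE_FOV_0001.fits") as hdulist:
    hdu = hdulist[1]                                  # extension holding the WCS (assumption)
    xpix, ypix = arcsec_to_pixel(hdu, xy_arcsec=(1.0, -0.5))
    xarc, yarc = pixel_to_arcsec(hdu, xy_pixel=(xpix, ypix))
    # (xarc, yarc) should recover approximately (1.0, -0.5)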

pymusepipe.align_pipe.create_offset_table(image_names, table_folder='', table_name='dummy_offset_table.fits', overwrite=False)[source]

Create an offset list table from a given set of images. It will use the MJD and DATE as read from the descriptors of the images. The names of these keywords are stored in the dictionary default_offset_table from config_pipe.py.

Parameters:
  • image_names (list of str) – List of image names to be considered. (Default value = [])

  • table_folder (str) – folder of the table (Default value = “”)

  • table_name (str) – name of the table to save [‘dummy_offset_table.fits’] (Default value = “dummy_offset_table.fits”)

  • overwrite (bool) – if the table exists, it will be overwritten only if set to True. (Default value = False)


Return type:

A fits table with the given output name.
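A minimal sketch (image and table names are hypothetical):

# Sketch: create a dummy offset table from two (hypothetical) MUSE images.
from pymusepipe.align_pipe import create_offset_table

create_offset_table(["IMAGE_FOV_0001.fits", "IMAGE_FOV_0002.fits"],
                    table_folder="Tables/",
                    table_name="offset_table_NGC0000.fits",
                    overwrite=True)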

pymusepipe.align_pipe.get_conversion_factor(input_unit, output_unit, filter_name='WFI')[source]

Conversion of units from an input one to an output one

Input

input_unit: astropy unit

Input astropy unit to analyse

output_unit: astropy unit

Astropy unit to compare to input unit.

equivalencies: astropy equivalency

Used in case there is an existing equivalency to help the conversion

returns:

conversion

rtype:

astropy unit conversion

pymusepipe.align_pipe.init_plot_optical_flow(opflow)[source]

Initialise the optical flow plot using the AlignmentPlotting

Input

opflow: optical flow instance (see spacepylot)

rtype:

An optical flow plot instance

pymusepipe.align_pipe.is_sequence(arg)[source]

Test if sequence and return the boolean result

Parameters:

arg (input argument) –

Returns:

result

Return type:

boolean

pymusepipe.align_pipe.pixel_to_arcsec(hdu, xy_pixel=(0.0, 0.0))[source]

Transform from pixel to arcsec for the MUSE image using the hdu to extract the WCS, hence the scaling.

Input

hdu: astropy hdu (fits)

Input hdu which includes a WCS

xy_pixel: tuple or list of 2 floats ((0,0))

Coordinates to transform from pixel to arcsec

returns:
  • xarc, yarc (2 floats) – Arcsecond coordinates

  • See also: arcsec_to_pixel (align_pipe.py)

pymusepipe.align_pipe.rotate_pixtable(folder='', name_suffix='', nifu=1, angle=0.0, **kwargs)[source]

Rotate a single IFU PIXTABLE_OBJECT. This updates the HIERARCH ESO INS DROT POSANG keyword.

Input

folder: str

name of the folder where the PIXTABLE are

name_suffix: str

name of the suffix to be used on top of PIXTABLE_OBJECT

nifu: int

Pixtable number. Default is 1

angle: float

Angle to rotate (in degrees)

pymusepipe.align_pipe.rotate_pixtables(folder='', name_suffix='', list_ifu=None, angle=0.0, **kwargs)[source]

Update the derotator angle in each of the 24 pixtables, using a loop over rotate_pixtable.

Will thus update the HIERARCH ESO INS DROT POSANG keyword.

Input

folder: str

name of the folder where the PIXTABLE are

name_suffix: str

name of the suffix to be used on top of PIXTABLE_OBJECT

list_ifu: list[int]

List of Pixtable numbers. If None, will do all 24

angle: float

Angle to rotate (in degrees)
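A minimal sketch (folder and suffix are hypothetical):

# Sketch: add 0.5 degree to the derotator angle of all 24 IFU pixtables
# of one (hypothetical) exposure.
from pymusepipe.align_pipe import rotate_pixtables

rotate_pixtables(folder="Object/",        # hypothetical folder with the PIXTABLEs
                 name_suffix="0001-01",   # hypothetical exposure suffix
                 list_ifu=None,           # None = loop over all 24 IFUs
                 angle=0.5)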

pymusepipe.check_pipe module

MUSE-PHANGS check pipeline module

class pymusepipe.check_pipe.CheckPipe(mycube='DATACUBE_FINAL.fits', pdf_name='check_pipe.pdf', pipe=None, standard_set=True, **kwargs)[source]

Bases: MusePipe

Checking the outcome of the data reduction

check_given_images(suffix=None)[source]

Check all images with given suffix

check_master_bias_flat()[source]

Checking the Master bias and Master flat

check_quadrants()[source]

Checking spectra from the 4 quadrants

check_sky_spectra(suffix)[source]

Check all sky spectra from the exposures

check_white_line_images(line='Ha', velocity=0.0)[source]

Build the white-light and Ha images and add them to the page

pymusepipe.check_pipe.print_plot(text)[source]

pymusepipe.combine module

MUSE-PHANGS combine module

class pymusepipe.combine.MusePointings(targetname=None, list_datasets=None, list_pointings=None, dict_exposures=None, prefix_masked_pixtables='tmask', folder_config='', rc_filename=None, cal_filename=None, combined_folder_name='Combined', suffix='', name_offset_table=None, folder_offset_table=None, log_filename='MusePipeCombine.log', verbose=True, debug=False, **kwargs)[source]

Bases: SofPipe, PipeRecipes

Class for a set of MUSE Pointings which can be covering several datasets. This provides a set of rules and methods to access the data and process them.
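A sketch of a typical use (target name, configuration files and offset table are hypothetical):

# Sketch: combine all pointings of a (hypothetical) target, using an
# offset table produced beforehand with the alignment module.
from pymusepipe.combine import MusePointings

pointings = MusePointings(targetname="NGC0000",
                          list_datasets=[1, 2],
                          rc_filename="rc_config.rc",       # hypothetical config files
                          cal_filename="calib_tables.rc",
                          name_offset_table="OFFSET_LIST.fits")
pointings.run_combine(lambdaminmax=[4700.0, 9400.0])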

create_all_pointings_wcs(filter_list='white', list_pointings=None, **kwargs)[source]

Create all pointing masks one by one, as well as the WCS for each individual pointing, using the grid from the global WCS of the mosaic but restricting it to the range of non-NaN values. Hence this needs a global mosaic WCS as a reference to work.

Input

filter_list: list of str

List of filter names to be used.

create_combined_wcs(refcube_name=None, lambdaminmax_wcs=[6800, 6805], **kwargs)[source]

Create the reference WCS from the full mosaic with a given range of lambda.

Input

refcube_name: str

Name of the cube. Can be None, and then the final datacube from the combine folder will be used.

wave1: float - optional

Wavelength taken for the extraction. Should only be present in all spaxels you wish to get.

prefix_wcs: str - optional

Prefix to be added to the name of the input cube. By default, will use “refwcs”.

add_targetname: bool [True]

Add the name of the target to the name of the output WCS reference cube. Default is True.

create_pointing_wcs(pointing, lambdaminmax_mosaic=[4700, 9400], filter_list='white', **kwargs)[source]

Create the mask of a given pointing, and also a WCS file which can then be used to compute individual pointings with a fixed WCS.

Input

pointing: int

Number of the pointing

filter_list: list of str

List of filter names to be used.

Creates:

pointing mask WCS cube

returns:

Name of the created WCS cube

create_reference_wcs(pointings_wcs=True, mosaic_wcs=True, reference_cube=True, refcube_name=None, **kwargs)[source]

Create the WCS reference files, for all individual pointings and for the mosaic.

pointings_wcs: bool [True]

Will run the individual pointings WCS

mosaic_wcs: bool [True]

Will run the combined WCS

lambdaminmax: [float, float]

extract_combined_narrow_wcs(name_cube=None, **kwargs)[source]

Create the reference WCS from the full mosaic with only 2 lambdas

Input

name_cube: str

Name of the cube. Can be None, and then the final datacube from the combine folder will be used.

wave1: float - optional

Wavelength taken for the extraction. Should only be present in all spaxels you wish to get.

prefix_wcs: str - optional

Prefix to be added to the name of the input cube. By default, will use “refwcs”.

add_targetname: bool [True]

Add the name of the target to the name of the output WCS reference cube. Default is True.

Creates:

Combined narrow band WCS cube

returns:

name of the created cube

property full_list_datasets
goto_folder(newpath, addtolog=False, verbose=True)[source]

Changing directory and keeping memory of the old working one

goto_origfolder(addtolog=False)[source]

Go back to original folder

goto_prevfolder(addtolog=False)[source]

Go back to previous folder

run_combine(sof_filename='pointings_combine', lambdaminmax=[4000.0, 10000.0], list_pointings=None, suffix='', **kwargs)[source]

MUSE exp_combine treatment of the reduced pixtables. Will run the esorex muse_exp_combine routine.

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the input frames

  • lambdaminmax (list of 2 floats) – Minimum and maximum lambda values to consider for the combine

  • suffix (str) – Suffix to be used for the output name

run_combine_all_single_pointings(add_suffix='', sof_filename='pointings_combine', list_pointings=None, **kwargs)[source]

Run for all pointings individually, provided in the list of pointings, by just looping over the pointings.

Input

list_pointings: list of int

Default is None (using the default self.list_pointings). Otherwise, a list of pointings for which you wish to run the combine, each pointing individually.

add_suffix: str

Additional suffix. ‘PXX’ where XX is the pointing number will be automatically added to that add_suffix for each individual pointing.

sof_filename: str

Name (suffix only) of the sof file for this combine. By default, it is set to ‘pointings_combine’.

lambdaminmax: list of 2 floats [in Angstroems]

Minimum and maximum lambda values to consider for the combine. Default is 4000 and 10000 for the lower and upper limits, resp.

run_combine_single_pointing(pointing, add_suffix='', sof_filename='pointing_combine', **kwargs)[source]

Running the combine routine on just one single pointing

Input

pointing: int

Pointing number. No default: must be provided.

add_suffix: str

Additional suffix. ‘PXX’ where XX is the pointing number will be automatically added to that add_suffix.

sof_filename: str

Name (suffix only) of the sof file for this combine. By default, it is set to ‘pointings_combine’.

lambdaminmax: list of 2 floats [in Angstroems]

Minimum and maximum lambda values to consider for the combine. Default is 4000 and 10000 for the lower and upper limits, resp.

wcs_from_mosaic: bool

True by default, meaning that the WCS of the mosaic will be used. If not there, will ignore it.

set_fullpath_names()[source]

Create full path names to be used. That includes: root, data, target, but also _dict_paths and paths.

pymusepipe.combine.build_dict_exposures(target_path='', str_dataset='OB', ndigits=3, show_pointings=False)[source]

Build a dictionary of exposures using the list of datasets found for the given dataset path

Parameters:
  • target_path (str) – Path of the target data

  • str_dataset (str) – Prefix string for datasets

  • ndigits (int) – Number of digits to format the name of the dataset

Returns:

dict_expo – Dictionary of exposures in each dataset

Return type:

dict

pymusepipe.combine.get_list_datasets(target_path='', str_dataset='OB', ndigits=3)[source]

Getting the list of existing datasets for a given target path

Input

target_path: str

Path of the target data

str_dataset: str

Prefix string for datasets

ndigits: int

Number of digits to format the name of the dataset

returns:

list_datasets

rtype:

list of int

pymusepipe.combine.get_list_exposures(dataset_path='')[source]

Getting a list of exposures from a given path

Input

dataset_path: str

Folder name where the dataset is

returns:

list_expos

rtype:

list of int

pymusepipe.combine.get_list_periods(root_path='')[source]

Getting a list of existing periods for a given path

Input

path: str

returns:

list_targets

rtype:

list of str

pymusepipe.combine.get_list_reduced_pixtables(target_path='', list_datasets=None, suffix='', str_dataset='OB', ndigits=3)[source]

Provide a list of reduced pixtables

Input

target_path: str

Path for the target folder

list_datasets: list of int

List of integers, providing the list of datasets to consider

suffix: str

Additional suffix, if needed, for the names of the PixTables.

pymusepipe.combine.get_list_targets(folder='')[source]

Getting a list of existing targets for a given folder

Input

folder: str

Folder name where the targets are

returns:

list_targets

rtype:

list of str

pymusepipe.config_pipe module

MUSE-PHANGS configuration module

pymusepipe.config_pipe.get_suffix_product(expotype)[source]

pymusepipe.create_sof module

MUSE-PHANGS creating sof file module

class pymusepipe.create_sof.SofDict[source]

Bases: OrderedDict

New dictionary for the SOF writing, inheriting from OrderedDict.

class pymusepipe.create_sof.SofPipe[source]

Bases: object

SofPipe class containing all the SOF writing modules

write_sof(sof_filename, new=False, verbose=None)[source]

Feeding an sof file with input filenames from a dictionary

pymusepipe.cube_convolve module

MUSE-PHANGS convolve module

pymusepipe.cube_convolve.convolution_kernel(input_psf, target_psf, scale=0.2)[source]

Create the 3D convolution kernel starting from a 3D model of the original PSF and a 2D model of the target PSF using pypher.

Parameters

input_psf (np.ndarray): 3D array with the model of the original PSF

target_psf (np.ndarray): 2D array with a model of the target PSF

scale (float): spatial scale of both PSFs in arcsec/pix

Returns

conv_kernel (np.ndarray): 3D array with a convolution kernel that varies as a function of wavelength.

pymusepipe.cube_convolve.convolution_kernel_gaussian(fwhm_wave, target_fwhm, target_psf, scale=0.2)[source]

Create the 3D convolution kernel starting from a 3D model of the original PSF and a 2D model of the target PSF using both gaussian functions.

Parameters:
  • fwhm_wave (array) – FWHM of the original PSF as a function of wavelength

  • target_fwhm (float) – fwhm of the target PSF

  • target_psf (np.ndarray) – target psf2d

  • scale (float) – spatial scale of both PSF in arcsec/pix

Returns:

np.ndarray

3D array with a convolution kernel that varies as a function of wavelength.

Return type:

conv_kernel

pymusepipe.cube_convolve.cube_convolve(data, kernel, variance=None, fft=True, fill_value=nan)[source]

Convolve a 3D datacube

Parameters:
  • datacube

  • kernel

Returns:

the convolved 3D data and its variance

pymusepipe.cube_convolve.cube_kernel(shape, wave, input_fwhm, target_fwhm, input_function, target_function, lambda0=6483.58, input_nmoffat=None, target_nmoffat=None, b=-3e-05, scale=0.2, compute_kernel='pypher')[source]

Main function to create the convolution kernel for the datacube

Parameters:
  • shape (array) – the shape of the datacube that is going to be convolved. It must be in the form (z, y, x).

  • wave (float array) – wavelengths for the datacube

  • target_fwhm (float) – fwhm of the target PSF.

  • input_fwhm (float) – fwhm of the original PSF at the reference wavelength lambda0

  • input_function (str) – function to be used to describe the input PSF

  • target_function (str) – function to be used to describe the target PSF

  • lambda0 – float, optional. The wavelength at which the input_fwhm has been measured. Default: 6483.58 (central wavelength of the WFI_BB filter)

  • input_nmoffat (float) – power index of the original PSF if Moffat [None]

  • target_nmoffat (float) – power index for the target PSF if Moffat [None]

  • b (float) – steepness of the fwhm vs wavelength relation. Default: -3e-5

  • step (float) – wavelength dispersion in AA/px

  • scale (float) – spatial pixel scale of the PSFs in arcsec/pix

  • compute_kernel (str) – method to compute the convolution kernel. It can be ‘pypher’ or ‘gaussian’

Returns:

np.ndarray

3D array to be used in the convolution

Return type:

Kernel
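A minimal sketch, using a small random cube as a stand-in for real data and illustrative PSF parameters:

# Sketch: build a wavelength-dependent kernel and convolve a small test cube.
import numpy as np
from pymusepipe.cube_convolve import cube_kernel, cube_convolve

shape = (100, 50, 50)                          # small (z, y, x) cube for illustration
wave = np.linspace(4750.0, 9350.0, shape[0])   # wavelength axis [Angstroem]
data = np.random.random(shape)                 # stand-in for real cube data

kernel = cube_kernel(shape, wave,
                     input_fwhm=0.7, target_fwhm=1.0,        # illustrative FWHMs [arcsec]
                     input_function="moffat", target_function="gaussian",
                     input_nmoffat=2.8,
                     compute_kernel="gaussian")               # avoid the pypher dependency
result = cube_convolve(data, kernel, fft=True)                # convolved data (and variance if given)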

pymusepipe.cube_convolve.gaussian_kernel(fwhm, size, scale=0.2, **kwargs)[source]

Gaussian kernel.

Input:

fwhm (float): fwhm of the Gaussian kernel, in arcsec.

size (int, ndarray): size of the requested kernel along each axis. If size is a scalar number the final kernel will be a square of side size. If size has two elements they must be in (y_size, x_size) order. In each case size must be an integer number of pixels.

scale (float): pixel scale of the image.

**kwargs: absorbs any additional parameter which could be provided (but won't be used).

pymusepipe.cube_convolve.moffat_kernel(fwhm, size, n=1.0, scale=0.2)[source]

Moffat kernel. Returns a Moffat function array according to given input parameters. Using astropy Moffat2DKernel.

Parameters:
  • fwhm (float) – fwhm of the Moffat kernel, in arcsec.

  • n (float) – power index of the Moffat

  • size (int or numpy array) – size of the requested kernel along each axis. If size is a scalar number the final kernel will be a square of side size. If size has two elements they must be in (y_size, x_size) order. In each case size must be an integer number of pixels.

  • scale (float) – pixel scale of the image [optional]

pymusepipe.cube_convolve.psf2d(size, fwhm, function='gaussian', nmoffat=None, scale=0.2)[source]

Create a model of the target PSF of the convolution. The target PSF does not vary as a function of wavelength, therefore the output is a 2D array.

Parameters
size: int, array-like

the size of the final array. If ``size’’ is a scalar number the kernel will be a square of side ``size’’. If ``size’’ has two elements they must be in (y_size, x_size) order.

fwhm: float

the FWHM of the psf

function: str, optional

the function to model the target PSF. Only ‘gaussian’ or ‘moffat’ are accepted. Default: ‘gaussian’

nmoffat (float): Moffat power index. It must be defined if function = ‘moffat’.

Default: None

scale: float, optional

the spatial scale of the final kernel

Returns
target: np.ndarray

a 2D array with the model of the target PSF.

pymusepipe.cube_convolve.psf3d(wave, size, fwhm0, lambda0=6483.58, b=-3e-05, scale=0.2, nmoffat=None, function='moffat')[source]

Function to create the cube with the lambda dependent PSF, following a given slope and nominal wavelength.

Parameters
wave: np.ndarray

array with the wavelength axis of the datacube

size: int, array-like

the size of the 2D PSF. If size is a scalar number the 2D PSF kernel will be a square of side size. If size has two elements they must be in (y_size, x_size) order.

fwhm0: float

the fwhm at the reference wavelength in arcseconds.

nmoffat: float

Power index of the Moffat profile. It is usually 2.8 for NOAO cubes and 2.3 for AO cubes.

lambda0: float

reference wavelength at which fwhm0 is measured. Default: 6483.58. (It’s the average wavelength for the WFI_BB filter)

b: float, optional

the steepness of the relation between wavelength and FWHM. Default: -3e-5 (arcsec/A) (From MUSE team)

scale: float, optional

spatial scale of the new datacube in arcsec. Default: 0.2 (MUSE spatial resolution).

function (str): ‘moffat’ or ‘gaussian’

Returns
psf_cube: np.array

Datacube containing MUSE PSF as a function of wavelength.
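A minimal sketch with illustrative numbers (the linear dependence of the FWHM on wavelength via fwhm0, lambda0 and b is implied by the parameters above, and is an assumption here):

# Sketch: build a wavelength-dependent Moffat PSF cube with illustrative values.
import numpy as np
from pymusepipe.cube_convolve import psf3d

wave = np.linspace(4750.0, 9350.0, 200)        # wavelength axis [Angstroem]
psf_cube = psf3d(wave, size=41,                # 41x41 pixel 2D PSFs
                 fwhm0=0.7,                    # illustrative FWHM at lambda0 [arcsec]
                 lambda0=6483.58, b=-3e-05,
                 nmoffat=2.8, function="moffat")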

pymusepipe.cube_convolve.pypher_script(psf_source, psf_target, pixscale_source=0.2, pixscale_target=0.2, angle_source=0.0, angle_target=0.0, reg_fact=0.0001, verbose=False)[source]

Calculate the convolution kernel to move from one PSF to a target one. This is an adaptation of the main pypher script, which is meant to be used from the terminal.

Parameters:
  • psf_source (ndarray) – 2D PSF of the source image.

  • psf_target (ndarray) – target 2D PSF

  • pixscale_source (float) – pixel scale of the source PSF [0.2]

  • pixscale_target (float) – pixel scale of the target PSF [0.2]

  • angle_source (float) – position angle of the source PSF. [0]

  • angle_target (float) – position angle of the target PSF. [0]

  • reg_fact (float) – Regularisation parameter for the Wiener filter [1.e-4]

  • verbose (bool) – If True it prints more info on screen [False]

Returns:

a 2D kernel that, convolved with the source PSF, returns the target PSF

Return type:

kernel
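A minimal sketch, building two Gaussian PSFs with gaussian_kernel above and deriving the matching kernel (values are illustrative; pypher must be installed):

# Sketch: kernel that brings a 0.7" Gaussian PSF to a 1.0" one, on a 0.2"/pix grid.
from pymusepipe.cube_convolve import gaussian_kernel, pypher_script

psf_source = gaussian_kernel(0.7, 41, scale=0.2)
psf_target = gaussian_kernel(1.0, 41, scale=0.2)
kernel = pypher_script(psf_source, psf_target,
                       pixscale_source=0.2, pixscale_target=0.2)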

pymusepipe.emission_lines module

pymusepipe.emission_lines.print_emission_lines()[source]

Printing the names of the various emission lines

pymusepipe.graph_pipe module

MUSE-PHANGS plotting routines

class pymusepipe.graph_pipe.GraphMuse(pdf_name='drs_check.pdf', figsize=(10, 14), rect_layout=[0, 0.03, 1, 0.95], verbose=True)[source]

Bases: object

Graphic output to check MUSE data reduction products

close()[source]
plot_page(list_data)[source]

Plot a set of blocks, each made of a set of spectra or images. This is for 1 page. It first counts the number of lines needed according to the separation for images (default is 2 per line, each image taking 2 lines) and spectra (1 spectrum per line over 2 columns).

plot_set_images(set_of_images=None)[source]

Plotting a set of images, skipping the ones that are 'None'

plot_set_spectra(set_of_spectra=None)[source]

Plotting a set of spectra, skipping the ones that are 'None'

savepage()[source]
start_page()[source]

Start the page

pymusepipe.graph_pipe.open_new_wcs_figure(nfig, mywcs=None)[source]

Open a new figure (with number nfig) with given wcs. If no WCS is provided, just opens a subplot in that figure.

Input

nfig: int

Figure number to consider

mywcs: astropy.wcs.WCS

Input WCS to open a new figure (Default value = None)

returns:

Figure itself with the subplots using the wcs projection

rtype:

fig, subplot

pymusepipe.graph_pipe.plot_compare_contours(data1, data2, plotwcs=None, labels=('Data1', 'Data2'), levels=None, nlevels=10, fignum=1, namefig='dummy_contours.png', figfolder='', savefig=False, **kwargs)[source]

Creates a plot with the contours of two input datasets for comparison

Input

data1 data2: 2d np.arrays

Input arrays to compare

plotwcs: WCS

WCS used to set the plot if provided

labels: tuple/list of 2 str

Labels for the plot

levels: list of floats

Levels to be used for the contours. Calculated if None.

fignum: int

Number for the figure

namefig: str

Name of the figure to be saved (if savefig is True)

figfolder: str

Name of the folder for the figure

savefig: bool

If True, will save the figure as namefig

Creates

Plot with contours of the two input datasets

pymusepipe.graph_pipe.plot_compare_cuts(data1, data2, labels=('X', 'Y'), figfolder='', fignum=1, namefig='dummy_polypar.png', ncuts=11, savefig=False, **kwargs)[source]

Input

data1, data2, labels, figfolder, fignum, namefig, ncuts, savefig, kwargs

Creates

Plot with a comparison of the two data arrays using regular X and Y cuts

pymusepipe.graph_pipe.plot_compare_diff(data1, data2, plotwcs=None, figfolder='', percentage=5, fignum=1, namefig='dummy_diff.ong', savefig=False, **kwargs)[source]
Parameters:
  • data1

  • data2

  • figfolder

  • fignum

  • namefig

  • savefig

  • kwargs

pymusepipe.graph_pipe.plot_polypar(polypar, labels=('Data 1', 'Data 2'), figfolder='', fignum=1, namefig='dummy_polypar.png', savefig=False, **kwargs)[source]

Creating a plot showing the normalisation arising from a polypar object

Parameters:
  • polypar

  • labels

  • figfolder

  • namefig

pymusepipe.graph_pipe.print_fig(text)[source]

pymusepipe.init_musepipe module

MUSE-PHANGS pipeline wrapper initialisation of folders

class pymusepipe.init_musepipe.InitMuseParameters(folder_config='Config/', rc_filename=None, cal_filename=None, verbose=True, **kwargs)[source]

Bases: object

init_default_param(dict_param)[source]

Initialise the parameters as defined in the input dictionary, hardcoded in config_pipe.py.

Input

dict_param: dict

Input dictionary defining the attributes

read_param_file(filename, dict_param)[source]

Reading an input parameter initialisation file

pymusepipe.init_musepipe.add_suffix_tokeys(dic, suffix='_folder')[source]

pymusepipe.mpdaf_pipe module

MUSE-PHANGS mpdaf-functions module

class pymusepipe.mpdaf_pipe.BasicFile(filename, **kwargs)[source]

Bases: object

Basic file with just the name and some properties to attach to that Cube

class pymusepipe.mpdaf_pipe.BasicPSF(function='gaussian', fwhm0=0.0, nmoffat=2.8, b=0.0, l0=6483.58, psf_array=None)[source]

Bases: object

Basic PSF function and parameters

property psf_array
class pymusepipe.mpdaf_pipe.MuseCube(source=None, verbose=False, **kwargs)[source]

Bases: Cube

Wrapper around the mpdaf Cube functionalities

astropy_convolve(other, fft=True, inplace=False)[source]

Convolve a DataArray with an array of the same number of dimensions using a specified convolution function.

Copy of _convolve for a cube, but doing it per slice or not

Masked values in self.data and self.var are replaced with zeros before the convolution is performed. However masked pixels in the input data remain masked in the output.

Any variances in self.var are propagated correctly.

If self.var exists, the variances are propagated using the equation:

result.var = self.var (*) other**2

where (*) indicates convolution. This equation can be derived by applying the usual rules of error-propagation to the discrete convolution equation.

Uses astropy.convolution.convolve_fft or astropy.convolution.convolve.

Parameters:
  • fft (boolean) –

    The convolution function to use, chosen from:

    • `astropy.convolution.convolve_fft’

    • `astropy.convolution.convolve’

    In general convolve_fft() is faster than convolve() except when other.data only contains a few pixels. However convolve_fft uses a lot more memory than convolve(), so convolve() is sometimes the only reasonable choice. In particular, convolve_fft allocates two arrays whose dimensions are the sum of self.shape and other.shape, rounded up to a power of two. These arrays can be impractically large for some input data-sets.

  • other (DataArray or numpy.ndarray) –

    The array with which to convolve the contents of self. This must have the same number of dimensions as self, but it can have fewer elements. When this array contains a symmetric filtering function, the center of the function should be placed at the center of pixel, (other.shape - 1)//2.

    Note that passing a DataArray object is equivalent to just passing its DataArray.data member. If it has any variances, these are ignored.

  • inplace (bool) – If False (the default), return a new object containing the convolved array. If True, record the convolved array in self and return self.

Return type:

~mpdaf.obj.DataArray

build_filterlist_images(filter_list, prefix='IMAGE_FOV', suffix='', folder=None, **kwargs)[source]
Parameters:
  • filter_list

  • prefix

  • suffix

  • folder

  • **kwargs

Returns:

convolve_cube_to_psf(target_fwhm, target_nmoffat=None, target_function='gaussian', outcube_folder=None, outcube_name=None, factor_fwhm=3, fft=True, erode_edges=True, npixels_erosion=2)[source]

Convolve the cube to a target PSF described by a 'gaussian' or 'moffat' function

Parameters:
  • target_fwhm (float) – target FWHM in arcsecond

  • target_nmoffat – target n if Moffat function

  • target_function (str) – ‘gaussian’ or ‘moffat’ [‘gaussian’]

  • factor_fwhm (float) – number of FWHM for size of Kernel

  • fft (bool) – use FFT to convolve or not [False]

  • perslice (bool) – doing it per slice, or not [True] If doing it per slice, using a direct astropy fft. If doing it with the cube, it uses much more memory but is more efficient as the convolution is done via mpdaf directly.

Creates:

Folder and convolved cube names
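A minimal sketch (the cube name is hypothetical; passing filename assumes it is forwarded to the underlying mpdaf Cube through **kwargs):

# Sketch: homogenise a (hypothetical) cube to a 1.0 arcsec Gaussian PSF.
from pymusepipe.mpdaf_pipe import MuseCube

cube = MuseCube(filename="DATACUBE_FINAL.fits")   # assumes mpdaf's filename keyword
cube.convolve_cube_to_psf(target_fwhm=1.0,
                          target_function="gaussian",
                          fft=True)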

create_reference_cube(lambdamin=4700, lambdamax=9400, step=1.25, outcube_name=None, filter_for_nan=False, **kwargs)[source]

Create a reference cube using an input one, and overriding the lambda part, to get a new WCS

Parameters:
  • lambdamin

  • lambdamax

  • step

  • outcube_name

  • filter_for_nan

  • **kwargs

Returns:

the name of the folder where

the output cube is, and its name

Return type:

cube_folder, outcube_name (str, str)

extract_onespectral_cube(wave1=6500.0, outcube_name=None, **kwargs)[source]

Create a single pixel cube extracted from this one.

Parameters:
  • wave1 (float) – Value of the wavelength to extract. In Angstroems.

  • outcube_name (str) – Name of the output cube

  • prefix (str) – If outcube_name is None (default), use that prefix to append in front of the input cube name (same folder)

Returns:

  • A new cube with only 2 lambdas, to be used as a WCS reference for masks.

get_emissionline_image(line=None, velocity=0.0, redshift=None, lambda_window=10.0, medium='vacuum')[source]

Get a narrow band image around Ha

Input

lambda_window: width of the window of integration, in Angstroems (10 by default)

medium: vacuum or air (string, 'vacuum' by default)

velocity: default is 0. (km/s)

redshift: default is None. Overwrites velocity if provided.

line: name of the emission line (see emission_lines dictionary)

get_filter_image(filter_name=None, own_filter_file=None, filter_folder='', dict_filters=None)[source]

Get an image given by a filter. If the filter belongs to the filter list, then use that, otherwise use the given file

get_image_from_cube(central_lambda=None, lambda_window=0)[source]

Get an image from the integrated cube, with the spectral window centred at central_lambda and a width of lambda_window

get_quadrant_spectra_from_cube(pixel_window=0)[source]

Get quadrant spectra from the Cube

Input

pixel_window : pixel_window of integration

get_set_spectra()[source]

Get a set of standard spectra from the Cube

get_spectrum_from_cube(nx=None, ny=None, pixel_window=0, title='Spectrum')[source]

Get a spectrum from the cube with centre defined in pixels with nx, ny and a window of ‘pixel_window’

get_whiteimage_from_cube()[source]
mask_trail(pq1=[0, 0], pq2=[10, 10], width=1.0, margins=0.0, reset=False, save=True, **kwargs)[source]

Build a cube mask from 2 points measured from a trail on an image

Input

pq1: array or tuple (float)

p and q coordinates of point 1 along the trail

pq2: array or tuple (float)

p and q coordinates of point 2 along the trail

width: float

Value (in pixel) of the full slit width to exclude

margins: float

Value (in pixels) by which to extend the slit beyond the 2 extrema. If 0, this means limiting it to the extrema themselves. Default is None, which means an infinitely long slit.

reset (bool): if True, reset the mask before masking the slit

save (bool): if True, save the masked cube
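Continuing the MuseCube sketch above, a trail could be masked as follows (the two points and the width are illustrative):

# Sketch: mask a satellite trail given two (p, q) points measured on an image,
# then save the resulting 0-1 mask.
cube.mask_trail(pq1=[12.0, 45.0], pq2=[250.0, 180.0],  # illustrative trail points
                width=6.0,                             # full slit width [pixels]
                reset=True, save=True)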

rebin_spatial(factor, mean=False, inplace=False, full_covariance=False, **kwargs)[source]

Combine neighboring pixels to reduce the size of a cube by integer factors along each axis.

Each output pixel is the mean of n pixels, where n is the product of the reduction factors in the factor argument. Uses the mpdaf rebin function, but adds a normalisation factor if mean=False (sum). It also updates the unit by just copying the old one.

Input

factor: int or (int, int)

Factor by which the spatial dimensions are reduced

mean: bool

If True, taking the mean; if False (default), summing

inplace: bool

If False (default), making a copy. Otherwise using the present cube.

full_covariance: bool

If True, will assume that spaxels are fully covariant. This means that the variance will be normalised by sqrt(N) where N is the number of summed spaxels. Default is False

returns:

Cube

rtype:

rebinned cube

save_mask(mask_name='dummy_mask.fits')[source]

Save the mask into a 0-1 image

class pymusepipe.mpdaf_pipe.MuseCubeMosaic(ref_wcs, folder_ref_wcs='', folder_cubes='', prefix_cubes='DATACUBE_FINAL_WCS', list_suffix=[], use_fixed_cubes=True, excluded_suffix=[], included_suffix=[], prefix_fixed_cubes='tmask', verbose=False, dict_exposures=None, dict_psf={}, list_cubes=None)[source]

Bases: CubeMosaic

build_list(folder_cubes=None, prefix_cubes=None, list_cubes=None, **kwargs)[source]

Building the list of cubes to process

Parameters:
  • folder_cubes (str) – folder for the cubes

  • prefix_cubes (str) – prefix to be used

convolve_cubes(target_fwhm, target_nmoffat=None, target_function='gaussian', suffix='conv', **kwargs)[source]
Parameters:
  • target_fwhm

  • target_nmoffat

  • input_function

  • target_function

  • suffix

  • **kwargs

Returns:

property cube_names
property list_cubes
madcombine(folder_cubes=None, outcube_name='dummy.fits', fakemode=False, mad=True)[source]

Combine the CubeMosaic and write it out.

Parameters:
  • folder_cubes (str) – name of the folder for the cube [None]

  • outcube_name (str) – name of the outcube

  • mad (bool) – using mad or not [True]

Creates:

A new cube, a combination of all input cubes listed in the CubeMosaic
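A minimal sketch (reference WCS, folder and output names are hypothetical):

# Sketch: mosaic a set of WCS-homogenised cubes with the MAD option enabled.
from pymusepipe.mpdaf_pipe import MuseCubeMosaic

mosaic = MuseCubeMosaic(ref_wcs="refwcs_NGC0000.fits",
                        folder_cubes="Cubes/",
                        prefix_cubes="DATACUBE_FINAL_WCS")
mosaic.madcombine(outcube_name="NGC0000_mosaic.fits", mad=True)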

property ncubes
print_cube_names()[source]
class pymusepipe.mpdaf_pipe.MuseFilter(filter_name='Cousins_R', filter_fits_file='filter_list.fits', filter_ascii_file=None)[source]

Bases: object

read()[source]

Reading the data in the file

class pymusepipe.mpdaf_pipe.MuseImage(source=None, **kwargs)[source]

Bases: Image

Wrapper around the mpdaf Image functionalities

get_fwhm_startend()[source]

Get range of FWHM

mask_trail(pq1=[0, 0], pq2=[10, 10], width=0.0, reset=False, extent=None)[source]

Build an image mask from 2 points measured from a trail

Input

pq1: array or tuple (float)

p and q coordinates of point 1 along the trail

pq2: array or tuple (float)

p and q coordinates of point 2 along the trail

width: float

Value (in pixel) of the full slit width to exclude

extent: float

Value (in pixels) by which to extend the slit beyond the 2 extrema. If 0, this means limiting it to the extrema themselves. Default is None, which means an infinitely long slit.

reset_mask()[source]

Resetting the Image mask

save_mask(mask_name='dummy_mask.fits')[source]

Save the mask into a 0-1 image

class pymusepipe.mpdaf_pipe.MuseSetImages(*args, **kwargs)[source]

Bases: list

Set of images

update(**kwargs)[source]
class pymusepipe.mpdaf_pipe.MuseSetSpectra(*args, **kwargs)[source]

Bases: list

Set of spectra

update(**kwargs)[source]
class pymusepipe.mpdaf_pipe.MuseSkyContinuum(filename)[source]

Bases: object

get_normfactor(background, filter_name='Cousins_R')[source]

Get the normalisation factor given a background value. Takes the background value and the sky continuum spectrum and converts this to the scaling Ks needed for this sky continuum. The principle relies on having the background measured as: MUSE_calib = ((MUSE - Sky_cont) + Background) * Norm

as measured from the alignment procedure.

Since we want: MUSE_calib = ((MUSE - Ks * Sky_cont) + 0) * Norm

This means that: Ks * Sky_cont = Sky_cont - Background ==> Ks = 1 - Background / Sky_cont

So we integrate the Sky_cont to get the corresponding S value and then provide Ks as 1-B/S

Input

background: float

Value of the background to consider

filter_name: str

Name of the filter to consider
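A quick numerical illustration of the relation above, with made-up numbers:

# Ks = 1 - Background / S, where S is the integral of the sky continuum
# in the chosen filter (values below are purely illustrative).
background = 25.0        # background from the alignment procedure
sky_integral = 500.0     # integrated sky continuum S in the filter
Ks = 1.0 - background / sky_integral   # -> 0.95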

integrate(muse_filter, ao_mask=False)[source]

Integrate a sky continuum spectrum using a certain filter file. If the file is a fits file, use it as the MUSE filter list. Otherwise use it as an ascii file

Input

muse_filter: MuseFilter

read()[source]

Read sky continuum spectrum from MUSE data reduction

save_normalised(norm_factor=1.0, prefix='norm', overwrite=False)[source]

Normalises a sky continuum spectrum and saves it in a new fits file

Input

norm_factor: float

Scale factor to multiply the input continuum

prefix: str

Prefix for the new continuum fits name. Default is ‘norm’, so that the new file is ‘norm_oldname.fits’

overwrite: bool

If True, existing file will be overwritten. Default is False.

class pymusepipe.mpdaf_pipe.MuseSpectrum(source=None, **kwargs)[source]

Bases: Spectrum

Wrapper around the mpdaf Spectrum functionalities

class pymusepipe.mpdaf_pipe.PixTableToMask(pixtable_name, image_name, suffix_out='tmask')[source]

Bases: object

This class is meant to just be a simple tool to mask out some regions from the PixTable using Image masks

create_mask(pq1=[0, 0], pq2=[10, 10], width=0.0, reset=False, mask_name='dummy_mask.fits', extent=None, **kwargs)[source]

Create the mask and save it in one go

Input

pq1: array or tuple (float)

p and q coordinates of point 1 along the trail

pq2: array or tuple (float)

p and q coordinates of point 2 along the trail

width: float

Value (in pixel) of the full slit width to exclude

reset: bool

By default False, so the mask goes on top of the existing one. If True, will reset the mask before building it.

extent: float

Value (in pixels) by which to extend the slit beyond the 2 extrema. If 0, this means limiting it to the extrema themselves. Default is None, which means an infinitely long slit.

imshow(**kwargs)[source]

Just showing the image

mask_pixtable(mask_name=None, **kwargs)[source]

Use the Image Mask and create a new Pixtable

Input

mask_name: str

Name of the mask to be used (FITS file)

use_folder: bool

If True, use the same folder as the Pixtable. Otherwise, just write to the current folder.

suffix_out: str

Suffix for the name of the output Pixtable. If provided, will overwrite the one in self.suffix_out.

save_mask(mask_name='dummy_mask.fits', use_folder=True)[source]

Saving the mask from the Image into a fits file

Input

mask_name: str

Name of the fits file for the mask

use_folder: bool

If True (default) will look for the mask in the image_folder. If False, will just look for it where the command is run.

Creates

A fits file with the mask as 0 and 1

pymusepipe.mpdaf_pipe.get_sky_spectrum(specname)[source]

Read sky spectrum from MUSE data reduction

pymusepipe.mpdaf_pipe.integrate_spectrum(spectrum, wave_filter, throughput_filter, ao_mask=False)[source]

Integrate a spectrum using a certain Muse Filter file.

Input

spectrum: Spectrum

Input spectrum given as an mpdaf Spectrum

wave_filter: float array

Array of wavelength for the filter

throughput_filter: float array

Array of throughput (between 0 and 1) for the filter. Should be the same dimension (1D, N floats) as wave_filter

pymusepipe.musepipe module

MUSE-PHANGS core module. This defines the main class (MusePipe) which can be used throughout this package.

This module is a complete rewrite of a pipeline wrapper for the MUSE dataset. All classes and objects were refactored.

However, the starting point of this package was initially inspired by several pieces of python code developed by various individuals, including Kyriakos and Martina from the GTO MUSE MAD team, and further rewritten by Mark van den Brok. Hence: a big Thanks to all three for this!

Note that several python packages exist which would provide similar (or better) functionalities.

Eric Emsellem took a version from early 2017, provided by Mark, and adapted it for the needs of the PHANGS project (PI Schinnerer). It was further refactored starting from scratch, but keeping a few initial ideas.

class pymusepipe.musepipe.MusePipe(targetname=None, dataset=1, folder_config='Config/', rc_filename=None, cal_filename=None, log_filename='MusePipe.log', verbose=True, musemode='WFM-NOAO-N', checkmode=True, strong_checkmode=False, **kwargs)[source]

Bases: PipePrep, PipeRecipes

Main Class to define and run the MUSE pipeline, given a certain galaxy name. This is the main class used throughout the running of the pipeline which contains functions and attributes all associated with the reduction of MUSE exposures.

It inherits from the PipePrep class, which prepares the recipes for running the MUSE pipeline, and PipeRecipes, which has the recipes described.
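A sketch of a typical reduction run (target name and configuration file names are hypothetical):

# Sketch: reduce dataset 1 of a (hypothetical) target with the PHANGS recipe set.
from pymusepipe.musepipe import MusePipe

pipe = MusePipe(targetname="NGC0000", dataset=1,
                folder_config="Config/",
                rc_filename="rc_config.rc",      # hypothetical configuration files
                cal_filename="calib_tables.rc")
pipe.init_raw_table()
pipe.run_phangs_recipes(fraction=0.8, illum=True, skymethod="model")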

goto_folder(newpath, addtolog=False, **kwargs)[source]

Changing directory and keeping memory of the old working one

Parameters:
  • newpath (str) – Path where to go to

  • addtolog (bool [False]) – Adding the folder move to the log file

goto_origfolder(addtolog=False)[source]

Go back to original folder

goto_prevfolder(addtolog=False)[source]

Go back to previous folder

Parameters:

addtolog (bool [False]) – Adding the folder move to the log file

init_raw_table(reset=False, **kwargs)[source]

Create a fits table with all the information from the Raw files. Also create an astropy table with the same info

Parameters:

reset (bool [False]) – Resetting the raw astropy table if True

property musemode

Mode for MUSE

print_musemodes()[source]

Print out the list of allowed muse modes

read_all_astro_tables(reset=False)[source]

Initialise all existing Astropy Tables

read_astropy_table(expotype=None, stage='master')[source]

Read an existing Masterfile data table to start the pipeline

retrieve_geoastro_name(date_str, filetype='geo', fieldmode='wfm')[source]

Retrieving the astrometry or geometry fits file name

Parameters:
  • date_str (str) – Date as a string (DD/MM/YYYY)

  • filetype (str) – ‘geo’ or ‘astro’, type of the needed file

  • fieldmode (str) – ‘wfm’ or ‘nfm’ - MUSE mode

save_expo_table(expotype, tpl_gtable, stage='master', fits_tablename=None, aggregate=True, suffix='', overwrite=None, update=None)[source]

Save the Expo (Master or not) Table corresponding to the expotype

set_fullpath_names()[source]

Create full path names to be used

sort_raw_tables(checkmode=None, strong_checkmode=None)[source]

Provide lists of exposures with types defined in the dictionary after excluding those with the wrong MUSE mode if checkmode is set up.

Input

checkmode: boolean

Checking the MUSE mode or not. Default is None, in which case the value predefined in self.checkmode is used instead of the one set here.

strong_checkmode: boolean

Strong check, namely in case you still wish to force the MUSE mode even for files which are not mode specific (e.g., BIAS). Default is None, in which case it uses self.strong_checkmode, which was already set at start-up.

class pymusepipe.musepipe.PipeObject(info=None)[source]

Bases: object

A very simple class used to store astropy tables.

pymusepipe.prep_recipes_pipe module

MUSE-PHANGS preparation recipe module

class pymusepipe.prep_recipes_pipe.PipePrep(first_recipe=1, last_recipe=None)[source]

Bases: SofPipe

The PipePrep class prepares the SOF files and launches the recipes

get_align_group(name_ima_reference=None, list_expo=[], line=None, suffix='', bygroup=False, **kwargs)[source]

Extract the needed information for a set of exposures to be aligned

static print_recipes()[source]

Printing the list of recipes

run_align_bydataset(sof_filename='exp_align_bydataset', expotype='OBJECT', list_expo=[], stage='processed', line=None, suffix='', tpl='ALL', **kwargs)[source]

Aligning the individual exposures from a dataset using the emission line region, with the muse exp_align routine.

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the input frames

  • tpl (ALL by default or a special tpl time) –

run_align_bygroup(sof_filename='exp_align_bygroup', expotype='OBJECT', list_expo=[], stage='processed', line=None, suffix='', tpl='ALL', **kwargs)[source]

Aligning the individual exposures from a dataset using the emission line region, with the muse exp_align routine.

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the input frames

  • tpl (ALL by default or a special tpl time) –

run_autocal_sky(sof_filename='scipost', expotype='SKY', AC_suffix='_AC', tpl='ALL', **extra_kwargs)[source]

Launch the scipost command to get individual exposures in a narrow band filter

run_bias(sof_filename='bias', tpl='ALL', update=None)[source]

Reducing the Bias files and creating a Master Bias. Will run the esorex muse_bias command on all Biases.

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the Bias frames

  • tpl (ALL by default or a special tpl time) –

run_check_align(name_offset_table, sof_filename='scipost', expotype='OBJECT', tpl='ALL', line=None, suffix='', folder_offset_table=None, **extra_kwargs)[source]

Launch the scipost command to get individual exposures in a narrow band filter to check if the alignments are ok (after rotation and using a given offset_table)

run_combine_dataset(sof_filename='exp_combine', expotype='OBJECT', list_expo=[], stage='processed', tpl='ALL', lambdaminmax=[4000.0, 10000.0], suffix='', **kwargs)[source]

Produce a cube from all frames in the dataset. The list_expo or tpl specific arguments can further reduce the selection if needed

run_fine_alignment(name_ima_reference=None, nexpo=1, list_expo=[], line=None, bygroup=False, reset=False, **kwargs)[source]

Run the alignment on this dataset, with or without a reference image

run_flat(sof_filename='flat', tpl='ALL', update=None)[source]

Reduce the Flat files and create a Master Flat. Will run the esorex muse_flat command on all Flats

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the Flat frames

  • tpl (ALL by default or a special tpl time) –

run_lsf(sof_filename='lsf', tpl='ALL', update=None)[source]

Reduce the LSF files and create the LSF PROFILE. Will run the esorex muse_lsf command on all arc exposures

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the arc frames

  • tpl (ALL by default or a special tpl time) –

run_phangs_recipes(fraction=0.8, illum=True, skymethod='model', **kwargs)[source]

Run all PHANGS recipes in one shot, using the basic setup for the general list of recipes

Input

fraction: float

Fraction of sky to consider in sky frames for the sky spectrum. Default is 0.8.

illum: bool

Default is True (use illumination during twilight calibration)

skymethod: str

Default is “model”.

run_prep_align(sof_filename='scipost', expotype='OBJECT', tpl='ALL', line=None, suffix='', **extra_kwargs)[source]

Launch the scipost command to get individual exposures in a narrow band filter

run_recipes(**kwargs)[source]

Running all recipes in one shot

Input

fraction: float

Fraction of sky to consider in sky frames for the sky spectrum. Default is 0.8.

illum: bool

Default is True (use illumination during twilight calibration)

skymethod: str

Default is “model”.

filter_for_alignment: str

Default is defined in config_pipe

line: str

Default is None as defined in config_pipe

lambda_window: float

Default is 10.0 as defined in config_pipe

run_scibasic(sof_filename='scibasic', expotype='OBJECT', tpl='ALL', illum=True, update=True, overwrite=True)[source]

Reduce the files of a certain category and create the PIXTABLES. Will run the esorex muse_scibasic

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the raw frames to process

  • tpl (ALL by default or a special tpl time) –

run_scibasic_all(list_object=['OBJECT', 'SKY', 'STD'], tpl='ALL', illum=True, **kwargs)[source]

Run scibasic for all object types in list_object, making a different SOF file for each category

run_scipost(sof_filename='scipost', expotype='OBJECT', tpl='ALL', stage='processed', list_expo=[], lambdaminmax=[4000.0, 10000.0], suffix='', **kwargs)[source]

Scipost treatment of the objects. Will run the esorex muse_scipost routine

Input

sof_filename: string (without the file extension)

Name of the SOF file which will contain the input frames

tpl: ALL by default or a special tpl time

list_expo: list of integers

Exposure numbers. By default, an empty list which means that all exposures will be used.

lambdaminmax: tuple of 2 floats

Minimum and Maximum wavelength to pass to the muse_scipost recipe

suffix: str

Suffix to add to the input pixtables.

norm_skycontinuum: bool

Normalise the skycontinuum or not. Default is False. If normalisation is to be done, it will use the offset_table and the tabulated background value to renormalise the sky continuum.

skymethod: str

Type of skymethod. See MUSE manual.

offset_list: bool

If True, use an OFFSET list. Default is True.

name_offset_table: str

Name of the offset table. If not provided, will use the default name produced during the pipeline run.

filter_for_alignment: str

Name of the filter used for alignment. Default is self.filter_for_alignment

filter_list: str

List of filters to be considered for reconstructed images. By Default will use the list in self.filter_list.

run_scipost_perexpo(sof_filename='scipost', expotype='OBJECT', tpl='ALL', stage='processed', suffix='', offset_list=False, **kwargs)[source]

Launch the scipost command exposure per exposure

Input

See run_scipost parameters

run_scipost_sky()[source]

Run scipost for the SKY with no offset list and no skymethod

run_sky(sof_filename='sky', tpl='ALL', fraction=0.8, update=None, overwrite=True)[source]

Reduce the SKY exposures after they have been processed with scibasic. Will run the esorex muse_create_sky routine

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the sky frames

  • tpl (ALL by default or a special tpl time) –

run_standard(sof_filename='standard', tpl='ALL', update=None, overwrite=True)[source]

Reduce the STD files after they have been obtained. Will run the muse_standard routine

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the standard star frames

  • tpl (ALL by default or a special tpl time) –

run_twilight(sof_filename='twilight', tpl='ALL', update=None, illum=True)[source]

Reduce the files and create the TWILIGHT CUBE. Will run the esorex muse_twilight command on all twilight exposures

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the twilight frames

  • tpl (ALL by default or a special tpl time) –

run_wave(sof_filename='wave', tpl='ALL', update=None)[source]

Reduce the WAVE-CAL files and create the Master Wave. Will run the esorex muse_wave command on all arc exposures

Parameters:
  • sof_filename (string (without the file extension)) – Name of the SOF file which will contain the arc frames

  • tpl (ALL by default or a special tpl time) –

save_fine_alignment(name_offset_table=None)[source]

Save the fine dataset alignment

select_tpl_files(expotype=None, tpl='ALL', stage='raw')[source]

Selecting a subset of files from a certain type

pymusepipe.prep_recipes_pipe.add_listpath(suffix, paths)[source]

Add a suffix to a list of paths and normalise them

pymusepipe.prep_recipes_pipe.norm_listpath(paths)[source]

Normalise the path for a list of paths

pymusepipe.prep_recipes_pipe.print_my_function_name(f)[source]

Print the name of the function being called. Can be used as a decorator
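
A minimal usage sketch (not part of the original docstring, and assuming the decorator can wrap any plain function):

    from pymusepipe.prep_recipes_pipe import print_my_function_name

    @print_my_function_name
    def my_toy_recipe():
        # hypothetical placeholder standing in for a real recipe step
        return "done"

    my_toy_recipe()  # the decorator should also print the function name when called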

pymusepipe.recipes_pipe module

MUSE-PHANGS recipe module

class pymusepipe.recipes_pipe.PipeRecipes(nifu=-1, first_cpu=0, ncpu=24, list_cpu=[], likwid='likwid-pin -c N:', fakemode=False, domerge=True, nocache=False, nochecksum=True)[source]

Bases: object

PipeRecipes class containing all the esorex recipes for MUSE data reduction

property checksum
property esorex
joinprod(name)[source]
property merge
recipe_align(sof, dir_products, namein_products, nameout_products, tpl, group, threshold=10.0, srcmin=3, srcmax=80, fwhm=5.0)[source]

Running the muse_exp_align recipe

recipe_bias(sof, dir_bias, name_bias, tpl)[source]

Running the esorex muse_bias recipe

recipe_combine(sof, dir_products, name_products, tpl, expotype, suffix_products='', suffix_prefinalnames='', prefix_products='', save='cube', pixfrac=0.6, suffix='', format_out='Cube', filter_list='white', lambdamin=4000.0, lambdamax=10000.0)[source]

Running the muse_exp_combine recipe for one single dataset

recipe_combine_pointings(sof, dir_products, name_products, suffix_products='', suffix_prefinalnames='', prefix_products='', save='cube', pixfrac=0.6, suffix='', format_out='Cube', filter_list='white', lambdamin=4000.0, lambdamax=10000.0)[source]

Running the muse_exp_combine recipe for pointings

recipe_flat(sof, dir_flat, name_flat, dir_trace, name_trace, tpl)[source]

Running the esorex muse_flat recipe

recipe_lsf(sof, dir_lsf, name_lsf, tpl)[source]

Running the esorex muse_lsf recipe

recipe_scibasic(sof, tpl, expotype, dir_products=None, name_products=[], suffix='')[source]

Running the esorex muse_scibasic recipe

recipe_scipost(sof, tpl, expotype, dir_products='', name_products=[''], suffix_products=[''], suffix_prefinalnames=[''], suffix_postfinalnames=[''], list_expo=[], save='cube,skymodel', filter_list='white', skymethod='model', pixfrac=0.8, darcheck='none', skymodel_frac=0.05, astrometry='TRUE', lambdamin=4000.0, lambdamax=10000.0, suffix='', autocalib='none', rvcorr='bary', **kwargs)[source]

Running the esorex muse_scipost recipe

recipe_sky(sof, dir_sky, name_sky, tpl, iexpo=1, fraction=0.8)[source]

Running the esorex muse_create_sky recipe

recipe_std(sof, dir_std, name_std, tpl)[source]

Running the esorex muse_standard recipe

recipe_twilight(sof, dir_twilight, name_twilight, tpl)[source]

Running the esorex muse_twilight recipe

recipe_wave(sof, dir_wave, name_wave, tpl)[source]

Running the esorex muse_wavecal recipe

run_oscommand(command, log=True)[source]

Run an os.system shell command. In fake mode, the command is only printed out and not actually executed.

write_errlogfile(text)[source]

Writing to the error log file

write_logfile(text, addext='')[source]

Writing to the log file

write_outlogfile(text)[source]

Writing to the output log file

pymusepipe.target_sample module

MUSE-PHANGS target sample module

class pymusepipe.target_sample.MusePipeSample(TargetDic, rc_filename=None, cal_filename=None, folder_config='', first_recipe=1, **kwargs)[source]

Bases: object

combine_target(targetname=None, **kwargs)[source]

Run the combine recipe. Shortcut for combine[targetname].run_combine()

combine_target_per_pointing(targetname=None, wcs_from_pointing=True, **kwargs)[source]

Run the combine recipe. Shortcut for combine[targetname].run_combine()

convolve_mosaic_per_pointing(targetname=None, list_pointings=None, dict_psf={}, target_fwhm=0.0, target_nmoffat=None, target_function='gaussian', suffix=None, best_psf=True, min_dfwhm=0.2, fakemode=False, **kwargs)[source]

Convolve the datacubes listed in a mosaic with some target function and FWHM. It will try to homogenise all individual cubes to that target PSF.

Parameters:
  • targetname (str) – name of the target

  • list_pointings (list) – list of pointing numbers for the list of pointings to consider

  • dict_psf (dict) – dictionary providing individual PSFs per pointing

  • target_fwhm (float) – target FWHM for the convolution [arcsec]

  • target_nmoffat (float) – tail factor for the moffat function.

  • target_function (str) – ‘moffat’ or ‘gaussian’ [‘gaussian’]

  • suffix (str) – input string to be added

  • best_psf (bool) – if True, use the minimum possible overall value; this will overwrite all the target parameters.

  • min_dfwhm (float) – minimum difference to be added in quadrature [in arcsec]

  • filter_list (list) – list of filters to be used for reconstructing images

  • fakemode (bool) – if True, will only initialise parameters but not proceed with the convolution.

  • **kwargs

Returns:

create_reference_wcs(targetname=None, pointings_wcs=True, mosaic_wcs=True, reference_cube=True, ref_wcs=None, refcube_name=None, **kwargs)[source]

Run the combine for individual exposures, first building up a mask.

finalise_reduction(targetname=None, rot_pixtab=False, create_wcs=True, create_expocubes=True, create_pixtables=True, create_pointingcubes=True, name_offset_table=None, folder_offset_table=None, dict_exposures=None, list_datasets=None, **kwargs)[source]

Finalise the reduction by using the offset table, rotating the pixel tables, reconstructing the PIXTABLE_REDUCED files, producing a reference WCS for each pointing, and then reconstructing the final individual cubes

Parameters:
  • targetname (str) –

  • rot_pixtab (bool) –

  • create_wcs (bool) –

  • create_expocubes (bool) –

  • name_offset_table (str) –

  • folder_offset_table (str) –

  • **kwargs

Returns:

init_combine(targetname=None, list_pointings=None, list_datasets=None, folder_offset_table=None, name_offset_table=None, **kwargs)[source]

Prepare the combination of targets

Input

targetname: str [None]

Name of target

list_pointings: list [None = all pointings]

List of pointings (e.g., [1,2,3])

name_offset_table: str

Name of Offset table

init_mosaic(targetname=None, list_pointings=None, prefix_cubes='DATACUBE_FINAL_WCS', **kwargs)[source]

Prepare the mosaicking of the pointings for a target

Input

targetname: str [None]

Name of target

list_pointings: list [None = all pointings]

List of pointings (e.g., [1,2,3])

mosaic(targetname=None, list_pointings=None, init_mosaic=True, build_cube=True, build_images=True, **kwargs)[source]
Parameters:
  • targetname

  • list_pointings

  • **kwargs

Returns:

reduce_all_targets(**kwargs)[source]

Reduce all targets already initialised

Input

first_recipe: int or str

Recipe to start with

last_recipe: int or str

Recipe to end with

reduce_target(targetname=None, list_datasets=None, **kwargs)[source]

Reduce one target for a list of datasets

Input

targetname: str

Name of the target

list_datasets: list

Dataset numbers. Default is None (meaning all datasets indicated in the dictionary will be reduced)

first_recipe: str or int [1]

last_recipe: str or int [max of all recipes]

Name or number of the first and last recipes to process
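
A minimal sketch of a typical call sequence (hypothetical target name, configuration filenames and dictionary content; the exact structure expected for TargetDic is an assumption here):

    from pymusepipe.target_sample import MusePipeSample

    # Hypothetical sample definition: one target with its dataset numbers
    target_dic = {'NGC1000': {'list_datasets': [1, 2]}}

    sample = MusePipeSample(target_dic,
                            rc_filename='rc_config.dic',      # hypothetical root-config file
                            cal_filename='calib_tables.dic',  # hypothetical calibration file
                            folder_config='Config/')
    # Reduce a single dataset of that target, restricting the recipe range
    sample.reduce_target(targetname='NGC1000', list_datasets=[1],
                         first_recipe=1, last_recipe='bias')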

reduce_target_postalign(targetname=None, list_datasets=None, **kwargs)[source]

Reduce target for all steps after pre-alignment

Input

targetname: str

Name of the target

list_datasets: list

Dataset numbers. Default is None (meaning all datasets indicated in the dictionary will be reduced)

reduce_target_prealign(targetname=None, list_datasets=None, **kwargs)[source]

Reduce target for all steps before pre-alignment (included)

Input

targetname: str

Name of the target

list_datasets: list

Dataset numbers. Default is None (meaning all datasets indicated in the dictionary will be reduced)

rotate_pixtables_target(targetname=None, list_datasets=None, folder_offset_table=None, name_offset_table=None, fakemode=False, **kwargs)[source]

Rotate all pixel table of a certain targetname and datasets

run_target_recipe(recipe_name, targetname=None, list_datasets=None, **kwargs)[source]

Run just one recipe on target

Input

recipe_name: str

Name of the recipe to run

targetname: str

Name of the target

list_datasets: list

Dataset numbers. Default is None (meaning all datasets indicated in the dictionary will be reduced)

run_target_scipost_perexpo(targetname=None, list_datasets=None, folder_offset_table=None, name_offset_table=None, **kwargs)[source]

Build the cube per exposure using a given WCS

Parameters:
  • targetname

  • list_datasets

  • **kwargs

Returns:

set_pipe_target(targetname=None, list_datasets=None, **kwargs)[source]

Create the musepipe instance for that target and list of datasets

Input

targetname: str

Name of the target

list_datasets: list

Dataset numbers. Default is None (meaning all datasets indicated in the dictionary will be reduced)

config_args: dict

Dictionary including extra configuration parameters to pass to MusePipe. This allows a global configuration to be defined. If self.__phangs is set to True, this is overwritten with the default PHANGS configuration parameters as provided in config_pipe.py.

class pymusepipe.target_sample.MusePipeTarget(targetname='', subfolder='P100', list_datasets=None)[source]

Bases: object

class pymusepipe.target_sample.PipeDict(*args, **kwargs)[source]

Bases: dict

Dictionary with extra attributes

run_on_all_keys(funcname)[source]

Runs the given function on all the keys

setdefault(key, value=None)[source]

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

update([E, ]**F) → None. Update D from dict/iterable E and F.[source]

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

pymusepipe.target_sample.insert_suffix(filename, suffix='')[source]

Create a new filename including the suffix in the name

Input

filename: str

suffix: str

pymusepipe.target_sample.update_calib_file(filename, subfolder='', folder_config='')[source]

Update the rcfile with a new root

Input

filename: str

Name of the input filename

folder_config: str

Default is “”. Name of folder for filename

subfolder: str

Name of subfolder to add in the path

pymusepipe.util_pipe module

MUSE-PHANGS utility functions for pymusepipe

class pymusepipe.util_pipe.Circle_Zone[source]

Bases: Selection_Zone

Define a Circular zone, defined by a center and a radius

select(xin, yin)[source]

Define a selection within a circle

Input

xin, yin: 2d arrays

Input positions for the spaxels

class pymusepipe.util_pipe.ExposureInfo(targetname, dataset, tpl, nexpo)[source]

Bases: object

class pymusepipe.util_pipe.Rectangle_Zone[source]

Bases: Selection_Zone

Define a rectangular zone, given by a center, a length, a width and an angle

select(xin, yin)[source]
Define a selection within a rectangle

It can be rotated by an angle theta (in degrees)

Input

xin, yin: 2d arrays

Input positions for the spaxels

class pymusepipe.util_pipe.Selection_Zone(params=None)[source]

Bases: object

Parent class for Rectangle_Zone and Circle_Zone

Input

params: list of floats

List of parameters for the selection zone

class pymusepipe.util_pipe.TimeStampDict(description='', myobject=None)[source]

Bases: OrderedDict

Class which builds a time stamp driven dictionary of objects

create_new_timestamp(myobject=None)[source]

Create a new item in dictionary using a time stamp

delete_timestamp(tstamp=None)[source]

Delete a key in the dictionary

class pymusepipe.util_pipe.Trail_Zone[source]

Bases: Selection_Zone

Define a Trail zone, defined by two points and a width

select(xin, yin)[source]

Define a selection within trail

Input

xin, yin: 2d arrays

Input positions for the spaxels

pymusepipe.util_pipe.abspath(path)[source]

Normalise the path to get it short but absolute

pymusepipe.util_pipe.add_key_dataset_expo(imaname, iexpo, dataset)[source]

Add dataset and expo number to image

Input

imaname: str

iexpo: int

dataset: int

pymusepipe.util_pipe.add_string(text, word='_', loc=0)[source]

Add a string (word) at a given location in the text. The default word is an underscore, for strings which are not empty.

Input

text (str): input text

word (str): word to be added

loc (int): location in ‘text’ [Default is 0 = start]. If None, the word will be added at the end.

rtype:

Updated text

pymusepipe.util_pipe.analyse_musemode(musemode, field, delimiter='-')[source]

Extract the named field from the musemode

Input

musemode: str

Mode of the MUSE data to be analysed

field: str

Field to analyse (‘ao’, ‘field’, ‘lambda_range’)

delimiter: str

Character to delimit the fields to analyse

returns:

val – Value of the field which was analysed (e.g., ‘AO’ or ‘NOAO’)

rtype:

str
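
A short usage sketch (the mode string below is illustrative; the values returned depend on the actual mode analysed):

    from pymusepipe.util_pipe import analyse_musemode

    mode = "WFM-NOAO-N"                       # illustrative MUSE mode string
    ao_flag = analyse_musemode(mode, 'ao')    # e.g. 'NOAO' for this mode
    field = analyse_musemode(mode, 'field')   # e.g. 'WFM'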

pymusepipe.util_pipe.append_file(filename, content)[source]

Append content to an ASCII file

pymusepipe.util_pipe.chunk_stats(list_arrays, chunk_size=15)[source]

Cut the datasets into 2D chunks and take the median. Return the set of medians for all chunks.

Parameters:
  • list_arrays (list of np.arrays) – List of arrays with the same sizes/shapes

  • chunk_size (int) – size in pixels (one dimension of a 2D chunk) of the chunks to consider (Default value = 15)

Returns:

median, standard – for the given datasets analysed in chunks.

Return type:

2 arrays of the medians and standard deviations
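
A minimal usage sketch with toy arrays (shapes and values are illustrative):

    import numpy as np
    from pymusepipe.util_pipe import chunk_stats

    rng = np.random.default_rng(42)
    image1 = rng.random((150, 150))
    image2 = 2.0 * image1                      # second array is a scaled copy of the first
    medians, stds = chunk_stats([image1, image2], chunk_size=15)
    print(medians.shape, stds.shape)           # one median/std per array and per chunk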

pymusepipe.util_pipe.create_time_name()[source]

Create a time-stamped name for file-saving purposes

Return: a string including the YearMonthDay_HourMinSec

pymusepipe.util_pipe.crop_data(data, border=10)[source]

Crop a 2D array and return it after a border (a given number of pixels) has been removed from each edge (2 x border pixels are removed from each dimension)

Input

data: 2d array

Array which has the signal to be cropped

border: int

Number of pixels to be cropped at each edge

returns:

cdata – Cropped data array

rtype:

2d array
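
A small usage sketch (the array size is illustrative):

    import numpy as np
    from pymusepipe.util_pipe import crop_data

    data = np.ones((100, 100))
    cdata = crop_data(data, border=10)
    print(cdata.shape)   # expected (80, 80): 10 pixels removed from each edge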

pymusepipe.util_pipe.doppler_shift(wavelength, velocity=0.0)[source]

Return the redshifted wavelength

pymusepipe.util_pipe.filter_list_with_pdict(input_list, list_datasets=None, dict_files=None, verbose=True)[source]

Filter out exposures (pixtab or cube name list) using a dictionary which has a list of datasets and, for each dataset, a list of exposure numbers.

Parameters:
  • input_list (list of str) – input list to filter

  • dict_files (dict) – dictionary used to filter

Returns:

  • selected_filename_list – selected list of files

  • exposure_list_per_pointing – selected list of files for each pointing

pymusepipe.util_pipe.filter_list_with_suffix_list(list_names, included_suffix_list=[], excluded_suffix_list=[], name_list='')[source]
Parameters:
  • list_names (list of str) –

  • included_suffix_list (list of str) –

  • excluded_suffix_list (list of str) –

Returns:

pymusepipe.util_pipe.filtermed_image(data, border=0, filter_size=2)[source]

Process image by removing the borders and filtering it via a median filter

Input

data: 2d array

Array to be processed

border: int

Number of pixels to remove at each edge

filter_size: float

Size of the filtering (median)

returns:

cdata – Processed array

rtype:

2d array

pymusepipe.util_pipe.flatclean_image(data, border=10, dynamic_range=10, median_window=10, minflux=0.0, squeeze=True, remove_bkg=True)[source]

Process the image by squeezing the range, removing the borders and filtering it. The image is first filtered, then cropped. All values below a given minimum are set to 0, and all NaNs are set to 0 or infinity accordingly.

Input

data: 2d array

Input array to process

dynamic_range: float [10]

Dynamic range used to squash the bright pixels down

median_window: int [10]

Size of the window used for the median filtering.

minflux: float [0]

Value of the minimum flux allowed.

squeeze: bool

Squeeze the dynamic range by using the dynamic_range variable

crop: bool

Crop the borders using border as the variable

remove_bkg: bool

If True, remove the median-filtered background

pymusepipe.util_pipe.formatted_time()[source]

Return: a string including the formatted time

pymusepipe.util_pipe.get_dataset_name(dataset=1, str_dataset='OB', ndigits=3)[source]

Format the dataset/pointing name using the number, the number of digits, and the prefix string

Input

dataset: int

Dataset (or Pointing) number

str_dataset: str

Prefix representing the dataset (or pointing)

ndigits: int

Number of digits to be used for formatting

rtype:

string for the dataset/pointing name prefix
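
A small usage sketch:

    from pymusepipe.util_pipe import get_dataset_name

    name = get_dataset_name(dataset=2, str_dataset='OB', ndigits=3)
    print(name)   # expected to look like 'OB002' (prefix + zero-padded number)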

pymusepipe.util_pipe.get_emissionline_band(line='Ha', velocity=0.0, redshift=None, medium='air', lambda_window=10.0)[source]

Get the wavelengths of an emission line, including a correction for the redshift (or velocity) and a lambda_window around that line (in Angstroems)

Parameters:
  • line (name of the line (string). Default is 'Ha') –

  • velocity (shift in velocity (km/s)) –

  • medium ('air' or 'vacuum') –

  • lambda_window (lambda_window in Angstroem) –

pymusepipe.util_pipe.get_emissionline_wavelength(line='Ha', velocity=0.0, redshift=None, medium='air')[source]

Get the wavelength of an emission line, including a correction for the redshift (or velocity)
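
A short usage sketch (Halpha and the velocity are illustrative choices):

    from pymusepipe.util_pipe import get_emissionline_band, get_emissionline_wavelength

    # Wavelength of Halpha shifted by 1500 km/s, in air
    lambda_ha = get_emissionline_wavelength(line='Ha', velocity=1500.0, medium='air')
    # Wavelength band bracketing the shifted line with a 10 Angstrom window
    band_ha = get_emissionline_band(line='Ha', velocity=1500.0, medium='air',
                                    lambda_window=10.0)
    print(lambda_ha, band_ha)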

pymusepipe.util_pipe.get_flux_range(data, border=15, low=2, high=98)[source]

Get the range of fluxes within the array by looking at percentiles.

Input

data: 2d array

Input array with signal to process

low, high: two floats (2, 98)

Percentiles to consider to filter

returns:

lperc, hperc – Low and high percentiles

rtype:

2 floats
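
A minimal usage sketch with a toy image:

    import numpy as np
    from pymusepipe.util_pipe import get_flux_range

    rng = np.random.default_rng(0)
    data = rng.normal(loc=100.0, scale=10.0, size=(200, 200))
    lperc, hperc = get_flux_range(data, border=15, low=2, high=98)
    print(lperc, hperc)   # fluxes at the low and high percentiles, borders excluded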

pymusepipe.util_pipe.get_normfactor(array1, array2, median_filter=True, border=0, convolve_data1=0.0, convolve_data2=0.0, chunk_size=10, threshold=0.0)[source]

Get the normalisation factor for shifted and projected images. This function only considers the input images given by their data (numpy) arrays.

Input

array1: 2d np.array

array2: 2d np.array

Input arrays. Should be the same size

median_filter: bool

If True, will median filter

convolve_data1: float [0]

Will convolve array1 with a gaussian of that sigma. 0 means no convolution

convolve_data2: float [0]

Will convolve array2 with a gaussian of that sigma. 0 means no convolution

border: int

Number of pixels to crop

threshold: float [0]

Threshold for the input image flux to consider

returns:
  • data (2d array)

  • refdata (2d array) – The 2 arrays (input, reference) after processing

  • polypar (the result of an ODR regression)

pymusepipe.util_pipe.get_polynorm(array1, array2, chunk_size=15, threshold1=0.0, threshold2=0, percentiles=(0.0, 100.0), sigclip=0)[source]

Find the normalisation factor between two arrays, including the background and slope. This uses the function regress_odr (also provided in this module), which itself makes use of scipy.odr.ODR.

Parameters:
  • array1 (2D np.array) –

  • array2 (2D np.array) – 2 arrays (2D) of identical shapes

  • chunk_size (int) – Default value = 15

  • threshold1 (float) – Lower threshold for array1 (Default value = 0.)

  • threshold2 (float) – Lower threshold for array2 (Default value = 0)

  • percentiles (list of 2 floats) – Percentiles (Default value = [0., 100.])

  • sigclip (float) – Sigma clipping factor (Default value = 0)

Returns:

result – Result of the regression (ODR)

Return type:

python structure
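
A minimal sketch with synthetic arrays (accessing result.beta assumes the standard scipy.odr Output structure, with the parameters ordered as in my_linear_model below):

    import numpy as np
    from pymusepipe.util_pipe import get_polynorm

    rng = np.random.default_rng(1)
    array1 = rng.random((150, 150)) * 100.0
    # array2 is a scaled and offset copy of array1, with a little noise added
    array2 = 1.5 * array1 + 10.0 + rng.normal(scale=0.5, size=array1.shape)
    result = get_polynorm(array1, array2, chunk_size=15)
    print(result.beta)   # regression parameters (background and slope, assumed order)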

pymusepipe.util_pipe.get_tpl_nexpo(filename)[source]

Get the tpl and nexpo from a filename, assuming they are at the end of the filename

Input

filename: str

Input filename

returns:

tpl, nexpo

rtype:

str, int

pymusepipe.util_pipe.lower_allbutfirst_letter(mystring)[source]

Lowercase all letters except the first one

pymusepipe.util_pipe.lower_rep(text)[source]

Lower the text and return it after removing all underscores

Parameters:

text (str) – text to treat

Returns:

updated text (with removed underscores and lower-cased)

pymusepipe.util_pipe.merge_dict(dict1, dict2)[source]

Merge two dictionaries, appending entries for keys which are duplicated

Input

dict1: dict

dict2: dict

returns:

dict1 – merged dictionary

rtype:

dict

pymusepipe.util_pipe.my_linear_model(B, x)[source]

Linear function for the regression.

Parameters:
  • B (1D np.array of 2 floats) – Input 1D polynomial parameters (0=constant, 1=slope)

  • x (np.array) – Array which will be multiplied by the polynomial

Return type:

An array = B[1] * (x + B[0])
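
A tiny sketch checking the call against the documented formula:

    import numpy as np
    from pymusepipe.util_pipe import my_linear_model

    B = np.array([10.0, 1.5])        # B[0] = constant, B[1] = slope
    x = np.linspace(0.0, 5.0, 6)
    y = my_linear_model(B, x)
    print(y)
    print(B[1] * (x + B[0]))         # same values, following the documented formula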

pymusepipe.util_pipe.normpath(path)[source]

Normalise the path to get it short

pymusepipe.util_pipe.prepare_image(data, median_filter=True, sigma=0.0, border=0)[source]

Median filter plus convolve the input image

Input

data: 2D np.array

Data to process

median_filter: bool

If True, will median filter

sigma: float [0]

Will convolve the data with a gaussian of this width (sigma). 0 means no convolution

returns:

data

rtype:

2d array

pymusepipe.util_pipe.print_debug(text, **kwargs)[source]

Print debugging information

Input

text: str

pipe: musepipe [None]

If provided, will print the text in the logfile

pymusepipe.util_pipe.print_endline(text, **kwargs)[source]
pymusepipe.util_pipe.print_error(text, **kwargs)[source]

Print error information

Input

text: str

pipe: musepipe [None]

If provided, will print the text in the logfile

pymusepipe.util_pipe.print_info(text, **kwargs)[source]

Print processing information

Input

text: str

pipe: musepipe [None]

If provided, will print the text in the logfile

pymusepipe.util_pipe.print_warning(text, **kwargs)[source]
pymusepipe.util_pipe.reconstruct_filter_images(cubename, filter_list=['white,Johnson_B,Johnson_V,Cousins_R,SDSS_g,SDSS_r,SDSS_i'], filter_fits_file='filter_list.fits')[source]

Reconstruct all images in a list of filters.

Input

cubename: str

Name of the cube

filter_list: str

List of filters, e.g., “Cousins_R,Johnson_I”. By default, the default_filter_list from pymusepipe.config_pipe is used

filter_fits_file: str

Name of the fits file containing all the filter characteristics. Usually filter_list.fits (MUSE default)
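
A minimal usage sketch (the cube name is hypothetical; the filter list follows the documented comma-separated string format):

    from pymusepipe.util_pipe import reconstruct_filter_images

    # 'DATACUBE_FINAL.fits' is a hypothetical cube name
    reconstruct_filter_images('DATACUBE_FINAL.fits',
                              filter_list="Cousins_R,Johnson_I",
                              filter_fits_file='filter_list.fits')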

pymusepipe.util_pipe.regress_odr(x, y, sx, sy, beta0=(0.0, 1.0), percentiles=(0.0, 100.0), sigclip=0.0)[source]

Return an ODR linear regression using scipy.odr.ODR

Parameters:
  • x – numpy.array

  • y – numpy.array Input array with signal

  • sx – numpy.array

  • sy – numpy.array Input array (as x,y) with standard deviations

  • beta0 – list or tuple of 2 floats Initial guess for the constant and slope

  • percentiles – tuple or list of 2 floats Two numbers providing the min and max percentiles

  • sigclip – float sigma factor for sigma clipping. If 0, no sigma clipping is performed

Returns:

result of the ODR analysis

Return type:

result
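
A minimal sketch with synthetic data (accessing result.beta assumes the standard scipy.odr Output structure):

    import numpy as np
    from pymusepipe.util_pipe import regress_odr

    rng = np.random.default_rng(2)
    x = np.linspace(1.0, 100.0, 200)
    y = 0.5 + 2.0 * x + rng.normal(scale=1.0, size=x.size)
    sx = np.full_like(x, 0.1)     # assumed uncertainties on x
    sy = np.full_like(y, 1.0)     # assumed uncertainties on y
    result = regress_odr(x, y, sx, sy, beta0=(0.0, 1.0), sigclip=3.0)
    print(result.beta)            # should be close to the input constant and slope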

pymusepipe.util_pipe.rotate_cube_wcs(cube_name, cube_folder='', outwcs_folder=None, rotangle=0.0, **kwargs)[source]

Routine to remove potential NaNs around an image and reconstruct an optimal WCS reference image. The rotation angle is provided as a way to optimise the extent of the output image, removing NaNs along X and Y at that angle.

Parameters:
  • cube_name (str) – input image name. No default.

  • cube_folder (str) – input image folder [‘’]

  • outwcs_folder (str) – folder where to write the output frame. Default is None which means that it will use the folder of the input image.

  • rotangle (float) – rotation angle in degrees [0]

  • **kwargs – in_suffix (str): in suffix to remove from name [‘prealign’] out_suffix (str): out suffix to add to name [‘rotwcs’] margin_factor (float): factor to extend the image [1.1]

Returns:

pymusepipe.util_pipe.rotate_image_wcs(ima_name, ima_folder='', outwcs_folder=None, rotangle=0.0, **kwargs)[source]

Routine to remove potential NaNs around an image and reconstruct an optimal WCS reference image. The rotation angle is provided as a way to optimise the extent of the output image, removing NaNs along X and Y at that angle.

Parameters:
  • ima_name (str) – input image name. No default.

  • ima_folder (str) – input image folder [‘’]

  • outwcs_folder (str) – folder where to write the output frame. Default is None which means that it will use the folder of the input image.

  • rotangle (float) – rotation angle in degrees [0]

  • **kwargs – in_suffix (str): in suffix to remove from name [‘prealign’] out_suffix (str): out suffix to add to name [‘rotwcs’] margin_factor (float): factor to extend the image [1.1]

Returns:
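
A minimal usage sketch (filenames and suffixes are hypothetical, following the defaults quoted above):

    from pymusepipe.util_pipe import rotate_image_wcs

    rotate_image_wcs('IMAGE_FOV_prealign.fits',   # hypothetical input image name
                     ima_folder='./', rotangle=30.0,
                     in_suffix='prealign', out_suffix='rotwcs', margin_factor=1.1)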

pymusepipe.util_pipe.safely_create_folder(path, verbose=True)[source]

Create a folder given by the input path. This small function tries to create the folder and, if it fails, checks whether the reason is that the given path is not valid, in which case it warns the user.

pymusepipe.util_pipe.select_spaxels(maskDic, maskName, X, Y)[source]

Selecting spaxels defined by their coordinates using the masks defined by Circle or Rectangle Zones

pymusepipe.version module

Copyright (c) 2016-2019 Eric Emsellem <>

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Module contents

Copyright (C) 2017 ESO/Centre de Recherche Astronomique de Lyon (CRAL). Print pymusepipe.__LICENSE__ for the terms of use.

This package is a wrapper around the MUSE pipeline commands to reduce MUSE raw data frames. It includes modules for aligning and convolving the frames. It also has some basic routines wrapped around mpdaf, the excellent Python package built around the MUSE PIXTABLES and reduced data.