The standard procedure to remove telluric absorption from a NIR spectrum is to use the spectrum of a star observed at a similar airmass to model these features. Typically an A0V star is used for this purpose. However, several H lines are present in the NIR spectrum of an A0V star, and to get an accurate telluric correction these H lines need to be modeled and removed from the spectrum of the telluric standard.
This can be done either by modeling the profile of the H lines with a Lorentzian (or Gaussian) profile, or by using a spectrum of Vega, smoothed to the instrumental resolution, as a template for the H lines. A sketch of the first approach is shown below.
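To make the line-fitting option concrete, here is a minimal Python sketch (assuming numpy and astropy are available) that fits a Lorentzian absorption profile on top of a linear continuum around a single H line and divides it out. The line center (Brackett-gamma at 2.1661 um), the fitting window, and the starting guesses are illustrative placeholders, not values from any particular pipeline:

```python
import numpy as np
from astropy.modeling import models, fitting

def remove_h_line(wave, flux, line_center, window=0.01):
    """Fit a Lorentzian absorption line on a linear continuum around
    line_center (same units as wave) and divide it out of the spectrum."""
    sel = np.abs(wave - line_center) < window
    cont_guess = np.median(flux[sel])
    # Linear continuum plus a negative-amplitude Lorentzian for the absorption
    model = (models.Linear1D(slope=0.0, intercept=cont_guess)
             + models.Lorentz1D(amplitude=-0.3 * cont_guess,
                                x_0=line_center, fwhm=0.002))
    fitter = fitting.LevMarLSQFitter()
    best = fitter(model, wave[sel], flux[sel])
    continuum = best[0](wave[sel])   # fitted continuum component
    line = best[1](wave[sel])        # fitted line component
    # Divide the data by the continuum-normalized line profile
    corrected = flux.copy()
    corrected[sel] = flux[sel] / ((continuum + line) / continuum)
    return corrected

# Example: remove Brackett-gamma (2.1661 um) from arrays wave, flux
# corrected_flux = remove_h_line(wave, flux, line_center=2.1661)
```

In practice you would loop this over each H line (Paschen and Brackett series) that falls in your wavelength range, and a Gaussian1D could be swapped in for the Lorentz1D if that profile fits better.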
Which one do you prefer/use? Please share if you know a better/different way to do it…
I’m not sure exactly what functional form is used to model the H lines, but I believe it is a Lorentz profile convolved with a Gaussian and a rotation kernel (if you enter a value for v sin i). You typically use the deepest H line to build a model for the kernel, and then, as a subsequent step, you can adjust the relative strength of each line to make sure that they all fit your observed telluric standard well. A rough sketch of such a kernel is given below.
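For illustration, here is a hedged NumPy/SciPy sketch of building such a composite kernel: a Lorentzian convolved with a Gaussian and a classical rotational-broadening profile. The grid spacing, the line widths, and the limb-darkening coefficient are placeholders, and this is only my reading of the approach, not the actual internals of xtellcor_general.pro:

```python
import numpy as np
from scipy.signal import fftconvolve

def rotation_kernel(dv, vsini, epsilon=0.6):
    """Classical rotational broadening profile on a velocity grid dv (km/s),
    with linear limb-darkening coefficient epsilon."""
    x = dv / vsini
    kern = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    kern[inside] = (2.0 * (1.0 - epsilon) * np.sqrt(1.0 - x[inside]**2)
                    + 0.5 * np.pi * epsilon * (1.0 - x[inside]**2))
    return kern / kern.sum()

# Velocity grid (km/s) and the three components; all widths are placeholders
dv = np.arange(-500.0, 500.0, 1.0)
lorentz = (10.0 / np.pi) / (dv**2 + 10.0**2)   # intrinsic line, HWHM = 10 km/s
gauss = np.exp(-0.5 * (dv / 15.0)**2)          # instrumental Gaussian, sigma = 15 km/s
rot = rotation_kernel(dv, vsini=100.0)         # rotation, v sin i = 100 km/s

# Convolve the pieces; the result would then be fit to the deepest H line
kernel = fftconvolve(fftconvolve(lorentz, gauss, mode='same'), rot, mode='same')
kernel /= kernel.sum()
```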
The only downside to using xtellcor_general.pro is that you need to have IDL and to feed it spectra that are formatted in a special way: the header must contain several key entries. If you would like to try it, open the routine writespec_ps.pro (also part of the Spextool package) and see how it writes out the data; the sketch below shows the general idea.
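As a rough illustration of that formatting step, here is a minimal astropy sketch that writes a 1-D spectrum to FITS with metadata in the primary header. The keyword names and the (3, npix) array layout below are placeholders that I am assuming for the example; writespec_ps.pro is the authoritative reference for the exact keywords and layout xtellcor_general.pro actually expects:

```python
import numpy as np
from astropy.io import fits

wave = np.linspace(1.0, 2.5, 2000)       # wavelength grid in microns (example)
flux = np.ones_like(wave)                # your extracted spectrum
error = 0.01 * np.ones_like(wave)        # its uncertainty

# Stack wavelength, flux, and error into one (3, npix) array
data = np.vstack([wave, flux, error])

hdu = fits.PrimaryHDU(data)
hdu.header['XUNITS'] = 'um'              # placeholder keyword: wavelength units
hdu.header['YUNITS'] = 'DN / s'          # placeholder keyword: flux units
hdu.header['AIRMASS'] = 1.12             # placeholder value from your observation
hdu.writeto('telluric_standard.fits', overwrite=True)
```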
If you are using Flamingos II data specifically, I will soon be posting a complete IDL pipeline to reduce Flamingos II data on this forum. It makes use of xtellcor_general.pro, so you will see exactly how the data needs to be formatted.