HST Data Handbook
Version 2, December 1995

Space Telescope Science Institute
3700 San Martin Drive
Baltimore, Maryland 21218
Operated by the Association of Universities for Research in Astronomy, Inc., for the National Aeronautics and Space Administration

Version 1.0, February 1994, Edited by Stefi Baum
Version 2.0, December 1995, Edited by Claus Leitherer

Contributors

Archive Group: Tim Kimball

Science Support Division
* FGS: Sherie Holfeltz
* FOC: Warren Hack, Robert Jedrzejewski, Antonella Nota
* FOS: Ralph Bohlin, Jennifer Christensen, Michael Dahlem, Jonathan Eisenhamer, Ian Evans, Jeffrey Hayes, Tony Keyes, Anuradha Koratkar, Don Lindler, Stephan Martin, Alex Storrs
* GHRS: Anne Gonnella, Stephen Hulbert, Claus Leitherer, Al Schultz, Lisa E. Sherbert, David Soderblom
* WF/PC-1 and WFPC2: Sylvia Baggett, Andy Fruchter, John Biretta, Stefano Casertano, Shireen Gonzaga, Inge Heyer, Matt McMaster, Keith Noll, Christine Ritchie, Massimo Stiavelli, Brad Whitmore, Mike Wiggs
* STSDAS: Dick Shaw
* SSD: Daniel Golombek, Krista Rudloff, Mark Stevens

Presto: Christopher O'Dea
Science and Engineering Systems Division: Olivia Lupie
University of Wisconsin: Robert Bless

This document was prepared by the Space Telescope Science Institute under U.S. Government contract NAS5-26555. Users shall not, without prior written permission of the U.S. Government, establish a claim to statutory copyright. The Government, and others acting on its behalf, shall have a royalty-free, nonexclusive, irrevocable, worldwide license for Government purposes to publish, distribute, translate, copy, and exhibit such material.

Send comments or corrections to:
Claus Leitherer, SSD
Space Telescope Science Institute
3700 San Martin Drive
Baltimore, Maryland 21218
E-mail: leitherer@stsci.edu

Table of Contents

Preface: How to Use This Handbook
  Purpose of this Handbook What's New?
  Finding Information Typographic Conventions Visual Cues Keystrokes Comments

Part 1: For All Observers

Chapter 1: Getting Help and Information
  Getting Help Accessing STEIS World Wide Web Structure Gopher Anonymous FTP Listserver Additional Documentation

Chapter 2: Data Analysis with IRAF and STSDAS
  IRAF Primer First Time Setting Up IRAF Starting and Stopping an IRAF Session IRAF Concepts Loading Packages Tasks and Commands Setting Parameters Setting Environment Variables Working with Files On-Line Help Troubleshooting STSDAS Tables Other Topics Displaying HST Images Image Display with the IRAF display Task Working with Image Sections and Groups Mosaic WF/PC-1 Images Analyzing HST Images RA and Dec Overview of Positional Information Using rimcursor Using metric for WF/PC-1 and WFPC2 Images Improving Your Astrometric Accuracy Photometry Plotting and Manipulating Image Data Displaying HST Spectra Producing Hardcopy Using fwplot grspec splot Analyzing HST Spectra Coadding Spectra Addition With Wavelength Alignment Combining Wavelength and Flux Information Mkmultispec Tables Using resample Spectral Analysis and Manipulation splot STSDAS fitting Package specfit HSP Specific Tasks Getting IRAF and STSDAS Retrieving the IRAF and STSDAS Software Synphot Dataset References Available from STScI Available from NOAO Other References Cited in This Chapter

Chapter 3: Data Tapes and File Structures
  Tape Log and Contents Tape Log Trailer Files FITS-Format Data Files Printouts of Files PDQ Files OCX Files Reading HST Data Tapes Loading Packages Mounting the Tape Using strfits Data Files Header and Data Files Rootnames and Datasets File Extensions Group Data

Chapter 4: Getting Data From the Archive
  Getting Started Using the HST Archives Accessing the Archive Hosts StarView Tutorial Welcome Screen Command Usage and Screen Interaction Searching the Catalog Retrieving Datasets From the Archive Exiting StarView Getting Your Data Tutorial Retrieving Calibration Reference Files
  Retrieving a File By Name

Chapter 5: Observation Logs
  What are Observation Log Files? Contents of Headers, Tables, and Jitter Images How to Access Observation Log Files How to Use Observation Log Files Guiding Mode Guide Star Acquisition Failure Moving Targets and Spatial Scans High Jitter

Part 2: Faint Object Camera

Chapter 6: FOC Instrument Overview
  Spatial Resolution and PSF Filters Field of View and Formats Sensitivity Observing Modes Other than Imaging Pre-COSTAR

Chapter 7: FOC Planned vs. Executed Observations
  File Extensions Header Files Parameters of FOC Observations Determining the Observational Parameters Displaying FOC Images Image Display with the IRAF display Task Modifying the Display What to Expect Commonly Observed Features

Chapter 8: Calibrating FOC Data
  FOC Pipeline Processing FOC Calibration Switches Dezooming of Zoomed Images (PIXCORR) Absolute Sensitivity Correction (WAVCORR) Geometric Correction (GEOCORR) Flatfield Correction (UNICORR) Limitations of the Calibration Process Nonlinearity Correction Geometric Correction Flatfield Residuals Format-Dependent Sensitivity Background Filter Induced Image Shifts Point Spread Function

Chapter 9: FOC Error Sources
  Errors in Absolute Photometry (f/96) Absolute Sensitivity of the f/48 Detector Photometric and Astrometric Accuracies Astrometric Accuracy of Calibrated FOC Data Photometric Accuracy of Calibrated FOC Data Systematic Offsets in Photometric Scale Accuracy of Flatfielding

Chapter 10: Recalibrating FOC Data
  Should You Recalibrate?
  Absolute Sensitivity Keywords Flatfields Geometric Correction Files Improved Pipeline Algorithms User Calibrations How to Recalibrate

Chapter 11: Specific FOC Calibration Issues
  Polarimetry Objective Prisms: Dispersion and Wavelength Scale f/48 Long Slit Spectroscopy More Information on FOC

Part 3: Faint Object Spectrograph

Chapter 12: FOS Instrument Overview
  Instrument Basics Observing Modes Target Acquisition Science Data Acquisition Spectrophotometry Mode Time-Resolved Spectrophotometry Mode Rapid Readout Mode Spectropolarimetry Mode

Chapter 13: FOS Planned vs. Executed Observations
  Contents of Data Tapes Headers and Keywords Binary Acquisition-ACQ/BIN Peak-up Acquisition-ACQ/PEAK Science Observations Engineering Data

Chapter 14: Calibrating FOS Data
  Pipeline Calibration Overview Input Files Standard Header Packet Unique Data Log Science Data Files Science Header Line Science Trailer Line Data Quality Files Reference Files Reference Tables Details of the FOS Pipeline Process Reading the Raw Data Calculating Statistical Errors (ERR_CORR) Data Quality Initialization Conversion to Count Rates (CNT_CORR) GIM Correction (OFF_CORR) Paired Pulse Correction (PPC_CORR) Background Subtraction (BAC_CORR) Scattered Light Correction (SCT_CORR) Flatfield Correction (FLT_CORR) Sky Subtraction (SKY_CORR) Computing the Wavelength Scale (WAV_CORR) Aperture Throughput Correction (APR_CORR) Absolute Flux Calibration Time Correction (TIM_CORR) Special Mode Processing (MOD_CORR) Scattered Light Correction Post-Calibration Output Files Polarimetric Calibration

Chapter 15: FOS Error Sources
  Photometric Inaccuracies Time-Dependent Variations in FOS Sensitivity Target Miscentering Flatfield Correction Change in Telescope Focus Location of Spectra (Y-bases) Thermal Breathing Jitter Geomagnetically Induced Image Motion Absolute Photometric Calibration System Offsets Wavelength Calibration Errors Filter-Grating Wheel Non-Repeatability Aperture Wheel Non-Repeatability Residual
  Uncertainty of Magnetic Field after GIM Correction Target Miscentering Other Data Problems Effect of Incorrect Dead Diode Reference File Effect of Noisy Diode Effect of Incorrect Flatfield Reference File Under-Subtraction of Background Light Scattered Light

Chapter 16: Recalibrating FOS Data
  Finding Reference Files and Calibration Information Recalibrating FOS Data Accuracies Wavelength Accuracy Photometric Accuracy Polarimetric Accuracy

Chapter 17: Specific FOS Calibration Issues
  Effects of COSTAR on FOS Data Aperture Dilution Correction for Extended Sources RAPID Mode Observation Timing Uncertainties

Part 4: Fine Guidance Sensors

Chapter 18: FGS Instrument Overview
  Detectors Optical Train Target Acquisition Science Data Acquisition

Chapter 19: FGS Planned vs. Executed Observations
  Planned Observations Executed Observations Data Products Data Quality Actual Observing Parameters: Header Keywords Displaying Data Typical FGS Data

Chapter 20: Calibrating FGS Data
  The Calibration Pipeline Unpack Reducing and Calibrating POS Mode Data Pos Make_plate Plate_soln Reducing and Calibrating TRANS Mode Data Scurve Merge Binary Some Common Problems in Pipeline Processing Unpacking Data TRANS Mode POS Mode Error Sources Recalibration of Data Specific FGS Calibration Problems Effects of the 1994 Servicing Mission and COSTAR Additional FGS Information Papers and Articles General References

Part 5: Goddard High Resolution Spectrograph

Chapter 21: GHRS Instrument Overview
  Dispersers Detectors Internal Calibration Side 1 COSTAR

Chapter 22: GHRS Planned vs.
Executed Observations
  Case 1: R136a in the LMC The Exposure Logsheet Examining the ACCUMs Putting FP-SPLITs Back Together Case 2: RAPID Mode (and a little about Spatial Scans) The Exposure Logsheet Examining the RAPID Data

Chapter 23: Calibrating GHRS Data
  Raw Science Data Science Data Return-To-Brightest and Small Science Aperture ACQ/PEAKUP Extracted Data Standard Header Packet (SHP) Unique Data Log (UDL) Trailer File Header Keywords Pipeline Calibration Process Standard Stars Observational Procedures and Data Reduction Relative Fluxes Absolute Fluxes Calibration Steps Data Quality Initialization (DQI_CORR) Conversion to Count Rates (EXP_CORR) Diode Response Correction (DIO_CORR) Paired Pulse Correction (PPC_CORR) Photocathode Mapping (MAP_CORR) Doppler Compensation (DOP_CORR) Photocathode Nonuniformity Removal (PHC_CORR) Vignetting Removal (VIG_CORR) Merging Substep Bins (MER_CORR) Background Removal (MDF_CORR, MNF_CORR, PLY_CORR, BCK_CORR) Determine Wavelengths (ADC_CORR, GWC_CORR) Apply the Incident Angle Correction (IAC_CORR) Echelle Ripple Correction (ECH_CORR) Absolute Flux Conversion (FLX_CORR) Heliocentric Correction (HEL_CORR) Vacuum Correction (VAC_CORR)

Chapter 24: GHRS Error Sources
  Calibration Pipeline Raw Data Quality Files Calibration Quality Files

Chapter 25: Recalibrating GHRS Data
  How to use calhrs Selecting the "Best" Reference Files Finding Appropriate Reference Files Running the Software Using Wavelength Calibration Exposures Correcting the Zero Point Offset Re-deriving the Dispersion Coefficients FP-SPLITs Flux How the Flux Scale is Calibrated Photometric Correction for Extended Sources

Chapter 26: Specific GHRS Calibration Issues
  Target Acquisition Problems Carousel Properties Timing of GHRS Observations When did my Observation Start? When did my Observations End? How Long did my Observation Last? What is the Exposure Time Per Pixel? Was my Observation Interrupted?
  SPYBAL Calibration Anomalies Geomagnetically Induced Motion Dead or Noisy Diodes Low Count Rate Vignetting Blemishes Doppler Compensation GHRS Point Spread Function (PSF) GHRS Line Spread Function (LSF) High Signal-to-Noise Observations

Part 6: High Speed Photometer

Chapter 27: HSP Instrument Overview

Chapter 28: Calibrating HSP Data
  Data Products and File Structures Standard Header Packet Unique Data Log Science Data Files Quality Mask Files Calibrated Data Files HSP Data Products HSP Keywords Displaying HSP Data Displaying the SHP and UDL Displaying HSP Area Scans Pipeline Calibration Calibration Switches Calibration Algorithms Correcting for Dead Time Computing the True Count Rate Computing the True Photocurrents Calculating Sample Time Calibration Parameter Polynomial HSP Calibration Parameter Tables

Chapter 29: HSP Error Sources
  Uncertainty in EXPSTART and EXPEND Disagreement Between PTSRCFLG and SHP Correcting Times in PRISM and STAR-SKY Modes Prism Aperture Calibration VIS Degradation Orbital Period and Ramp SDF Clock Errors Bogus Data Packets Data Echo Problem Expected Accuracies of HSP Data Further Analysis References

Part 7: Wide Field/Planetary Camera-1

Chapter 30: WF/PC-1 Instrument Overview
  Data Acquisition

Chapter 31: WF/PC-1 Planned vs. Executed Observations
  Header Keywords

Chapter 32: Calibrating WF/PC-1 Data
  Overview of Pipeline Processing Calibration of WF/PC-1 Data Calibration Details Static Mask Saturated Pixels A/D Fixup Bias Level Bias File Preflash/CTE File Dark File Flatfield Photometry Keywords Histograms Data Quality Files Calibration Files

Chapter 33: WF/PC-1 Error Sources
  Effect of Decontaminations Scattered Light Persistent Measles Hot Pixels Cosmic Rays Flatfield Anomalies Photometry Plate Distortion Data Accuracies and Problem Solving

Chapter 34: Recalibrating WF/PC-1 Data
  Should You Recalibrate? Determining the "Best" Reference Files When is Best Not the Best?
  Available Flats Delta Flats Recalibrating WF/PC-1 Data Calculating Absolute Sensitivity for WF/PC-1

Chapter 35: WF/PC-1 Calibration Issues
  PSFs Observed PSF Library PSF Limitations and Effect of Jitter

Part 8: Wide Field Planetary Camera 2

Chapter 36: WFPC2 Instrument Overview

Chapter 37: WFPC2 Planned vs. Executed Observations
  Data Files and Extensions Header Keywords Correlating Phase II Exposures with Data Files

Chapter 38: Calibrating WFPC2 Data
  Overview of Pipeline Calibration WFPC2 Calibration Process Calibration Files Calibration Steps

Chapter 39: WFPC2 Error Sources
  Bias Subtraction Error Flatfield Errors Dark Current Subtraction Errors

Chapter 40: Recalibrating WFPC2 Data
  The Standard Pipeline Assembling the Calibration Files Setting Calibration Switches Calibration Beyond the Pipeline Superdarks and Hot Pixel Removal Calibrating Polarization Data

Chapter 41: Specific WFPC2 Calibration Issues
  The Zeropoint Photometric Corrections Contamination (Time Dependent) April 23, 1994 Cool Down (Time Dependent) PSF Variations (Time Dependent) Charge Transfer Efficiency (Position Dependent) Geometric Distortion (Position Dependent) Gain Variance (Position Dependent) Pixel Centering (Position Dependent) Miscellaneous Photometric Corrections Aperture Correction Color Terms Digitization Noise Red Leaks Charge Traps Exposure Times: Serial Clocks An Example of Photometry with WFPC2 Further WFPC2 Reduction Issues Cosmic Rays Charge Traps Dithering Reconstruction and Deconvolution WFPC2 Image Anomalies Bias Jumps Residual Images PC1 Stray Light Other Anomalies References

Glossary
Index

------------------------------------------------------------------------------

How to Use This Handbook

In This Preface...
  Purpose of this Handbook
  What's New?
  Finding Information
  Typographic Conventions

This HST Data Handbook gives you the information you need to work with the data from a Hubble Space Telescope (HST) observation from any instrument.
In this introduction, we explain how to make the best possible use of this handbook.

Purpose of this Handbook

The HST Data Handbook is a guide intended to help you maximize the scientific return of data obtained with HST. We describe how data are sent to observers, how to read the data and recognize the file structures, how to display data, and, most importantly, how to understand the data and look for common (and not so common) anomalies in the data. This handbook assumes no prior knowledge of HST data, the IRAF and STSDAS software, or any specific knowledge of HST instruments, other than that required to have prepared a successful observing proposal. A typical HST user should find this document sufficient to answer most of the questions arising during data analysis. If you have any questions that are not addressed in this handbook, the STScI Help Desk (help@stsci.edu) is available to serve you. The Help Desk staff will either answer your questions directly or refer them to the appropriate STScI expert.

What's New?

This new version of the HST Data Handbook contains substantially more information than version 1.0. The previous version was written at about the time of the first servicing mission in December 1993; therefore the Wide Field Planetary Camera 2, the new instrument, was only briefly covered. In the meantime, WFPC2 has produced a wealth of new data, and WF/PC-1 data are used only by archival researchers. The properties of the Faint Object Camera, the Faint Object Spectrograph, and the Goddard High Resolution Spectrograph were significantly changed by COSTAR, so that pre- and post-COSTAR data are quite different. The revised version of the HST Data Handbook takes the new instrument properties into account. The WF/PC-1 and WFPC2 and their data products are discussed separately.
FOC, FOS, and GHRS data obtained before and after the servicing mission are still discussed together for each instrument, but differences between pre- and post-COSTAR data are highlighted. It is assumed that data are post-COSTAR by default, but a discussion of pre-COSTAR data is given for the benefit of archival researchers. The Fine Guidance Sensors experienced only a minor change due to COSTAR; therefore, analysis of data obtained before and after December 1993 will not be very different. Since the High Speed Photometer was replaced by COSTAR, only pre-COSTAR HSP data exist. The current section on HSP data analysis is only slightly revised.

In designing the structure of the HST Data Handbook, we arranged the chapters to reflect the order in which most users would approach their data reduction. We start with a general overview and some basics of data structure and reduction, which should be beneficial for most users. Archival research is becoming increasingly important, and the analysis of archival data is not substantially different from the analysis of new data. Therefore, the chapter describing the HST Archive is no longer an appendix but has become an integral part of the handbook. Version 2.0 contains a new chapter describing the Observatory Monitoring System (OMS), which provides easy access to pointing information for observations. This capability did not exist when Version 1.0 of the handbook was written.

Finding Information

This handbook is divided into a series of chapters, each covering one specific aspect of HST data. Much of the information provided in this handbook is independent of the STSDAS software and will be of use even to those not reducing and analyzing their data with STSDAS. However, we have described, at each step, how to use tasks in IRAF and STSDAS to work with your HST data. Chapters 1 through 5 are not instrument specific: they should be read by everyone before proceeding to the subsequent chapters, which are instrument-specific.
Most users will analyze only data obtained with one instrument at a time, and the structure of this handbook follows this approach.

Chapter 1 is an overview of the available resources for obtaining documentation and getting help. Everybody using HST data should read this chapter before analyzing the data. We describe how to access the electronic information system at the Space Telescope Science Institute, including the World Wide Web server.

Chapter 2 is a brief introduction to IRAF and STSDAS. It is far from exhaustive, and more complete discussions are provided in the IRAF and STSDAS manuals. However, this chapter should provide enough information for a novice IRAF user to start using the system for basic data manipulations.

Chapter 3 is for the observer who has received an HST data tape and needs to load the data to a local disk. File structures and naming conventions are explained. This chapter pertains to all instruments.

An introduction to the HST Data Archive is provided in Chapter 4. For most purposes, data from the Archive are not different from data sent to observers on tapes. Throughout this handbook, new and archival data are treated identically.

In Chapter 5, we describe the data produced by the Observatory Monitoring System (OMS), a software system reporting instrument and observatory status during an observation. This information is useful, for example, when determining the spacecraft jitter.

Following the generic chapters is a sequence of parts, each covering one science instrument. Each of these parts is divided into several chapters. The instrument chapters prepare you to work with, and understand the limitations of, the data for a particular HST instrument. The instrument parts include the following topics:
* An instrument overview describing the hardware configuration.
* A chapter describing how to compare planned observations with the data returned from the executed observation.
* Calibrating data.
* Potential error sources.
* Recalibrating data.
* Specific calibration concerns.

A short glossary defining HST-specific terms, as well as acronyms and abbreviations used in this manual, is provided as an appendix to this document.

Typographic Conventions

To help you understand the material in the HST Data Handbook, we use a few consistent typographic conventions.

Visual Cues

The following typographic cues are used:
* bold words identify an STSDAS or IRAF task or package name.
* typewriter-like words identify a variable, file name, system command, or response that is typed or displayed as shown.
* italic type indicates a new term or an important point.
* ALL CAPS identifies a header keyword or a table column.

Keystrokes

Keystroke commands and sequences are identified by the following formats:
* Shift-Q - When two keys are linked by a dash, both keys should be pressed at the same time.
* Esc D - When a space separates two keys, a sequence is indicated. Press one key, release it, then press the other.
* M - Press only the indicated key. If we meant that you should press Shift with the key, we would say so, as in the first example above.

Comments

We include three types of information that are called out, each identified by an icon in the left margin.

Tip: No problem, just another way to do something, or something that might make your life easier.

Heads Up: This is something that is often done incorrectly or that is not obvious.

Warning: You could corrupt data or produce incorrect results.

------------------------------------------------------------------------------

The chapters in this part of the handbook provide general information about HST datasets and the tools used to work with them. Topics in this part include:
* How to get help and find information.
* How to use IRAF and STSDAS software to work with the datasets.
* What files constitute a dataset.
* How to use the HST Data Archive to find and retrieve files.
* How to understand and use the Observation Log files.
CHAPTER 1
Getting Help and Information

In This Chapter...
  Getting Help
  Accessing STEIS
  Additional Documentation

In this chapter, we describe the services that STScI provides and how to find and obtain information about the Hubble Space Telescope (HST), its instruments, and observations and their analysis. We also list additional documentation that is available from the Help Desk at STScI. Additional information about virtually any HST-related topic can be found on the Space Telescope Electronic Information System (STEIS).

Getting Help

User support services are available from either the Space Telescope Science Institute (STScI) or the Space Telescope European Coordinating Facility (ST-ECF). European users should generally contact the ST-ECF staff for help; all other users should contact STScI. If a contact scientist has been assigned to your program, you should address questions to that contact scientist. Otherwise, contact the general Help Desk at STScI; European users can get general help through ST-ECF in Garching.

To contact the Help Desk at STScI:
* Send e-mail: help@stsci.edu
* Phone: 1-410-338-1082

For questions related to the HST Archive:
* Send e-mail: archive@stsci.edu

To contact the Help Desk at ST-ECF:
* Send e-mail: stdesk@eso.org

The Help Desk staff at STScI quickly provides answers to any question about HST-related topics. The Help Desk staff has access to all of the experts and resources available at the Institute, and they maintain a database of questions and answers so that frequently asked questions can be answered immediately. The Help Desk staff also provides STScI documentation, in either hardcopy or electronic form, including all user manuals, Instrument Science Reports, Instrument Handbooks, and more. Questions sent to the Help Desk during normal business hours will normally be answered the same day. Questions received outside business hours will be answered the next morning.
Usually, the Help Desk staff will reply with the answer to a question, but occasionally they will need more time to investigate the answer. In these cases, they will reply with an estimate of the time needed to provide the full answer. If you feel that your needs are not being adequately addressed through the Help Desk, please contact the Science Support Division Head, Knox Long, by e-mail at long@stsci.edu or by phone at (410) 338-4862.

Accessing STEIS

The Space Telescope Electronic Information Service (STEIS) is a collection of electronically available files containing a wide variety of information about HST for professional astronomers and the public at large. Essentially all new documentation created by the Institute is posted to STEIS. Resources on STEIS include:
* Information on submitting a proposal for HST observing time.
* Documentation, including instrument handbooks and guides for reducing scientific data.
* News from HST, including recent observations and telescope performance.
* Status of scientific instruments.
* Data reduction software.
* Information about the HST Data Archive.
* The HST observing schedule.
* Calibration information.
* Educational material, including MPEG movies and images of selected HST data (in several popular formats, including TIFF, GIF, and JPEG).

This section describes how to use various access methods to find and retrieve information from STEIS. Most users are now using the World Wide Web, which is described in more detail than the other methods (such as FTP or Gopher).

World Wide Web

The World Wide Web (WWW) is a hypertext-oriented way of navigating through the Internet and finding information. The project was started and is maintained by CERN (the European Laboratory for Particle Physics). Hypertext is text that contains links to other documents. Access to WWW is handled by running client software (a browser) on your local host.
A popular browser is the National Center for Supercomputing Applications (NCSA) Mosaic browser, which will be shown in the examples here. Another popular browser is Netscape. The browser allows you to read hypertext documents that are made available at individual Internet sites (such as STEIS) and to navigate from document to document by fetching the selected hypertext documents. The location of each document is described by an address called (in WWW jargon) the uniform resource locator (URL). The home page for STEIS (shown in Figure 1.1) is fetched by the following URL:

http://www.stsci.edu/top.html

You can also give a URL when you first start Mosaic, for example:

Mosaic http://www.stsci.edu/top.html

After starting Mosaic, you can specify a URL by clicking on the "Open" button and typing the URL in the pop-up window. The highlighted text is connected, or linked, to other documents, images, or files. To navigate in hyperspace, just click on the highlighted text with the left mouse button and you will be connected to that document. You can save a document to disk by clicking the "Save As" button; in the pop-up window that appears, you can specify a directory and file name for the file and choose a format, i.e., ASCII, PostScript, or Hypertext Markup Language (HTML). (In Netscape, you can do this by holding down the right mouse button over a link.) Similar to Gopher's bookmarks, it is possible to save in a hotlist the locations of documents that you may wish to revisit later. Click on the "Navigate" button at the top of the screen and, in the pull-down menu, click on "Add current to hotlist". Clicking on "hotlist" in the same pull-down menu allows you to select and fetch the documents. In Netscape, the equivalent feature is called "bookmarks" and is available from the menu bar.

1. MPEG is the Moving Picture Experts Group format. TIFF is the Tagged Image File Format. GIF is the Graphics Interchange Format. JPEG is the Joint Photographic Experts Group format.
Each of these is a widely understood graphics file format.

Figure 1.1: STEIS Home Page

The World Wide Web is the recommended method for obtaining electronic information from STEIS. As you can see in Figure 1.1, the home page provides links to seven major areas:
* STScI: Information about the Space Telescope Science Institute, its staff, library, and meetings, and about the Publications of the Astronomical Society of the Pacific (PASP).
* Public: Contains links to information of general interest to educators, students, the media, and the general public. These include pictures (in GIF, TIFF, and JPEG formats) obtained from HST observations, movies and animation, and the text from press releases. It also includes HyperCard books for the Macintosh.
* Proposer: Information and tools for astronomers proposing HST observations. Resources include the call for proposals, templates and instructions, and documentation.
* Instruments: Information on the scientific instruments aboard HST. Includes links to handbooks, instrument status, calibration information, and Instrument Science Reports.
* Observer: Information and resources for HST observers, including Phase II proposal and budget templates and software, as well as the weekly HST observation schedule.
* Software: Contains links to the STSDAS software, Digitized Sky Survey CD-ROM information, hcompress image compression software, and Telescope Image Modeling software.
* Archive: Information and tools for searching and using the Archive of past HST observations.

Structure

Table 1.1 shows the major directories on STEIS and describes their contents. Those directories in the white bars contain files and subdirectories mentioned in this document. The shaded rows contain information that is unlikely to be needed to reduce or analyze HST datasets. This structure can be expected to change over time.
Some of the resources available on STEIS that will be of interest to observers analyzing HST data include:
* Calibration reference files in the cdbs/ directory.
* Instrument Handbooks in the documents/ directory.
* The HST Archive Manual in the documents/ directory.
* Gopher access (via telnet) to the HST Archive host computers through the hst-archive/ directory.
* STEIS memos mentioned in Book II and other instrument information in the instrument_news/ directory.
* STSDAS software and documentation, which can be retrieved from the software/ directory.
* TIM and TinyTIM software in the software/ directory.

Table 1.1: STEIS Directory Structure (shaded blocks are not critical)

* cdbs : Files used in recalibrating HST data.
  Major subdirectories: calobs/ calspec/ comp/ crwave/ fields/ grid/ uref/ utab/ vtab/ wref/ wtab/ yref/ ytab/ zref/ ztab/
* documents : ASCII and PostScript versions of documents.
  Major subdirectories: archive-manual/ baltimore-charter/ calibration/ fgs-handbook/ foc-handbook/ fos-handbook/ gasp-cookbook/ ghrs-handbook/ hsp-handbook/ image-restoration/ phase2-instructions/ wfpc2-handbook/ network-resources/ stsdas-docs/
* ExInEx/ : Hypercard images from the Special Studies Office.
* hst-archive/ : Files and tools of use to archival researchers. If using WWW or Gopher, it is possible to access the HST Data Archive from within this directory.
* instrument-news/ : Abstracts of the most recent instrument reports written by instrument scientists, describing reduction techniques, etc.
  Major subdirectories: fgs/ foc/ ghrs/ hsp/ observatory/ ota/ scs/ smo/ wfpc/ wfpc2/
* meetings/ : Information about upcoming meetings at STScI.
* observer/ : Information of interest to observers, including catalogs and schedules of upcoming observations.
  Major subdirectories: catalogs/ weekly_timelines/
* pasp/ : Information from the Publications of the Astronomical Society of the Pacific.
policy/           Copies of letters to PIs,
                  membership on STScI
                  committees, STScI long-range
                  plans.
proposer/         Information and resources for   cycle6/ software/
                  proposing HST observations.
software/         Software for use with HST       cdrom/ casb/ hcompress/ rps2/
                  data, including STSDAS, TIM,    stsdas/ tim/
                  and image compression
                  software.
stsci/            Information for and about       epa/ grant_office/ hst_news/
                  STScI, including images in     human_resources/ library/
                  the epa directory,              newsletters/ steis/
                  information about grants, and
                  other Institute news.
------------------------------------------------------------------------------

Each instrument team maintains informational listings about currently available reference files on STEIS. These listings often contain information about the nature and quality of the reference files. You may wish to consult these listings before you decide to recalibrate. Each of the instrument chapters will describe necessary reference files and provide information about locating and retrieving the files.

Gopher

You can also access STEIS through Gopher if the software is installed at your home site. Gopher is a user-friendly interface developed at the University of Minnesota; it allows you to browse through files before retrieving them and to search WAIS indexes. If you have Gopher installed, you can access STEIS by typing one of the following commands.

Terminal-Based Gopher: gopher www.stsci.edu
X-Windows Gopher: xgopher www.stsci.edu

Figure 1.2: Xgopher

Anonymous FTP

You can log in to STEIS by using FTP to open a connection to ftp.stsci.edu and then logging in with username "anonymous" and your e-mail address as the password. Use the cd command to change directory (e.g., cd /instrument_news), ls to get a listing of the contents of a directory, and get filename to transfer a single file to your home computer via FTP. Use mget (e.g., mget template*) to get a series of files. Images and tables should be transferred in binary mode.
Type binary at the ftp prompt before getting binary files. If you find files with a .Z extension, the file is compressed. If you leave off the .Z extension when typing your get command, the server will automatically uncompress the file for you. Most images on STEIS are Unix-format GEIS files and most tables are Unix-format STSDAS tables. If you wish to use these files on a VMS computer, you will need to convert the images and tables to VMS format using sun2vax and tconvert (both tools within STSDAS), respectively.

Listserver

Users can subscribe to certain popular files, which will be e-mailed automatically whenever they are updated. It is also possible to request items from the archive of previous files. The list of available files will expand with time to include other frequently accessed ASCII files. Please send any requests for additional files you would like to see made available through listserver to hst_query@stsci.edu. Files currently available for subscription are listed in Table 1.2.

Table 1.2: Files Available Through Listserver
File Name   Description
------------------------------------------------------------------------------
timeline    Detailed weekly schedule of observations
hst-status  Daily activity and instrument status reports
wfpc2_cal   Calibration information for WFPC2
foc_news    Information about the Faint Object Camera
fos_news    Information about the Faint Object Spectrograph
ghrs_news   Information about the Goddard High Resolution Spectrograph
wfpc_news   Information about both the WF/PC-1 and the WFPC2
------------------------------------------------------------------------------

To subscribe to a listserver file, send a message with a blank subject line to listserv@stsci.edu and the following text:

subscribe file name Your Name

where "file name" is one of the above file names and "Your Name" represents the user's full name. Your return e-mail address is copied from this message and added to the list of subscribers. To unsubscribe from a list, just replace the word "subscribe" with "unsubscribe" in the above message.
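For example, a hypothetical user named Jane Doe could subscribe to the weekly timeline file by sending the following one-line message (with a blank subject line) to listserv@stsci.edu:

```
subscribe timeline Jane Doe
```

To stop receiving the file later, she would instead send "unsubscribe timeline Jane Doe" to the same address.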
For a quick overview of commands, send a message with a blank subject line to the same e-mail address but with the word "HELP" as the message.

Additional Documentation

STScI maintains a complete list of available documentation. Copies of this list can be obtained by sending an e-mail request to help@stsci.edu. Some of the documentation that expands on information provided in the HST Data Handbook includes:
* STSDAS Users Guide, version 1.3, February 1994.
* HST Archive Primer, version 5.1, May 1995.
* HST Archive Users Manual, version 5.1, February 1995.
* FOC Instrument Handbook, version 6.0, June 1995.
* FOS Instrument Handbook, version 6.0, June 1995.
* FGS Instrument Handbook, version 5.0, June 1995.
* GHRS Instrument Handbook, version 6.0, June 1995.
* WF/PC-1 Instrument Handbook, version 5.0, April 1994.
* WFPC2 Instrument Handbook, version 6.0, June 1995.
* Synphot User's Guide, version 1.3.3, March 1995.
* Calibration of the Hubble Space Telescope: Proceedings from a Workshop Held at STScI November 1993, J. Chris Blades and Samantha Osmer (eds.).
* Various Instrument Science Reports describing technical and calibration issues.

Chapter 2
Data Analysis with IRAF and STSDAS

In This Chapter...
IRAF Primer
Displaying HST Images
Analyzing HST Images
Displaying HST Spectra
Analyzing HST Spectra
Getting IRAF and STSDAS
References

The Space Telescope Science Data Analysis System (STSDAS) is the software system for calibrating and analyzing data from the Hubble Space Telescope. The package is large and powerful, containing programs--called tasks--to perform a wide range of functions supporting the entire data analysis process, from reading your tapes, through recalibration and analysis, to producing your final plots and images. STSDAS is built on top of the Image Reduction and Analysis Facility (IRAF) software developed at the National Optical Astronomy Observatory (NOAO).
This means that any task in IRAF can be used in STSDAS and that the software will be portable across a number of platforms and operating systems. To exploit the power of STSDAS effectively, you will need to learn the basics of IRAF.

IRAF Primer

This section provides just enough information to help you start using IRAF and STSDAS effectively. These concepts are general and apply to using tasks in IRAF, STSDAS, TABLES, or any IRAF layered package. As your experience with the software increases, you will need to reference more complete information, which can be found in the STSDAS User's Guide and in the documentation available from NOAO, especially A Beginner's Guide to Using IRAF. Some of the concepts described in this chapter include:
* How to set up IRAF the first time you use the software.
* How to start and stop an IRAF session.
* Basic concepts, such as loading packages, setting parameters, etc.
* How to use the on-line help facility.
* File formats used in IRAF and STSDAS, such as Generic Edited Information Set (GEIS) image format and STSDAS table format.

First Time...

This section is for new IRAF users. It explains:
* How to set up your working IRAF environment (below).
* How to start the IRAF program and how to quit when you're done.

We assume that your site has IRAF and STSDAS installed. If this is not the case, you must obtain and install the software. See "Getting IRAF and STSDAS" on page 70 for details.

Setting Up IRAF

Before running IRAF for the first time, you must:
1. Define your root IRAF directory.
2. Define environment variables or system logicals and symbols.
3. Use mkiraf.

First, you must decide which directory you will use as your root IRAF directory (also referred to as your IRAF home directory). Users generally name their IRAF home directory iraf and set it up in their account's root directory (i.e., the default directory that you are in when you log in to the system).
The IRAF home directory doesn't need to be in your account's root directory, nor does it need to be called iraf; it can be anywhere and be named anything, but you should not put it on a scratch disk that is periodically erased. After you define your home directory, switch to that directory so that you can define the needed environment variables or system logicals and symbols. Assuming that you call your root IRAF directory "iraf", this is done as follows:

For VMS: (This can be placed in your LOGIN.COM file.)
$ CREATE/DIR [.IRAF]
$ SET DEFAULT [.IRAF]
$ IRAF
$ MKIRAF

For Unix: (This can be placed in your .login file.)
> mkdir iraf
> cd iraf
> setenv iraf /usr/stsci/iraf
> source $iraf/unix/hlib/irafuser.csh
> mkiraf

(The directory name /usr/stsci/iraf is site dependent; check with your system staff.)

The mkiraf command initializes IRAF. The command works the same under either VMS or Unix. The mkiraf command does two things for you:
* Creates and initializes your login.cl file.
* Creates a subdirectory called uparm.

After typing the mkiraf command, you will see the following:

> mkiraf
-- creating a new uparm directory
Terminal types: gterm=ttysw+graphics,vt640 ...
Enter terminal type:

Enter the type of terminal or workstation you will most often use with IRAF. Terminal types and workstation emulators vary from site to site, and you should look at the dev$termcap file for a complete list of terminal types. Generic terminal types that will work for most users are:
* vt100 for most terminals.
* xtermjhs for most workstations running under X-Windows.
* xgterm for sites that have installed X11 IRAF and IRAF v2.10.3 BETA or later.

You can change your terminal type at any time by typing set term=new_type during an IRAF session. You can also change your default type by editing the appropriate line in your login.cl file.^1

1. Users at STScI should consult the STScI Site Guide for IRAF and STSDAS.
After you enter your terminal type, you will see the following output before getting your regular prompt:

A new LOGIN.CL file has been created in the current...
You may wish to review and edit this file to change...

The login.cl file is the startup file used by the IRAF command language (CL). It is similar to the LOGIN.COM file used by VMS or the .login file used by Unix. Whenever IRAF starts, it looks at the login.cl file. You can edit this file to customize your IRAF environment. In fact, you should look at it to make sure that everything in it is correct. In particular, there is a line starting with set home = that tells IRAF where to find your IRAF home directory. You should verify that this statement does, in fact, point to your IRAF directory. If you will be working with IRAF format images (OIF), you should also insert a line saying set imdir = "HDR$". The imdir setting is ignored when working with GEIS format images.

The uparm directory will contain your own copies of IRAF task parameters. This allows you to further customize your IRAF environment by setting certain parameter values as defaults. Once you set up IRAF, you should rarely need to do it again. When you want to use IRAF, simply move to your IRAF directory and type cl.

Starting and Stopping an IRAF Session

To start an IRAF session:
1. Move to your IRAF home directory.
2. Type cl.

IRAF starts by displaying several lines of introductory text and then puts a prompt at the bottom of the screen. Figure 2.1 is a sample IRAF startup screen.

Figure 2.1: IRAF Startup Screen

To quit an IRAF session:
1. Type logout.

IRAF Concepts

This section describes basic techniques such as:
* Loading packages (below).
* Running tasks and commands (page 19).
* Viewing and setting parameters (page 19).
* Setting and using environment variables (page 22).

The typical sequence of doing things in IRAF is:
1.
Define any environment variables that you may want to use during your session, such as the location of reference files and tables.
2. Load the packages you want to use.
3. Edit parameter sets for the tasks you want to use.
4. Run the tasks.

Each of these steps is described in the following sections, although not in this order, because the most fundamental and commonly-used concepts are described first. For example, not every user will need to redefine environment variables, especially during a first session.

Loading Packages

In IRAF jargon, an application is called a task, and logically related tasks are grouped together in a package. Before you can use a task, its package must be loaded.
* Load a package by typing its name.
* When you load a package, the prompt changes to the first two letters of the package name.
* When you load a package, the names of all the newly-available tasks and subpackages are displayed.

Figure 2.2: Loading Packages

Some commands that will help you manage packages are:
* ? - List tasks in the most recently-loaded package.
* ?? - List all tasks loaded, regardless of package.
* package - List names of all loaded packages.
* bye - Exit the current package.

Tasks and Commands

This section explains how to run tasks, run background tasks, run system-level commands, and use piping and redirection.

Running a Task

* Run a task by typing its name, or any unambiguous abbreviation of it.

Normally, you would also specify values for any required parameters on the command line when you run the task; however, you don't need to. If you start a task by simply typing its name, the task will prompt you for any required information, such as the names of input files, starting values, etc. The use of parameters is an important concept and will be discussed in the next section. An example of running a task with a single parameter is:

st> gstatistics dev$pix

IRAF does not require you to type the complete command name--only enough of it to make it unique.
For example, dir is sufficient for directory.

Background Tasks

To run a task as a background job, freeing your workstation window for other work, add an ampersand (&) to the end of the command line. For example:

st> taskname &

Escaping System-Level Commands

To run an operating system-level command (i.e., a Unix or VMS command) from within the IRAF CL, precede the command with an exclamation point (!). This is called escaping the command. For example:

st> !system_command

Piping and Redirection

You can run tasks in sequence, with the output of one task being used as the input for another. This is called piping, and is done by separating commands with a vertical bar ( | ). For example:

st> task1 filename | task2

You can also redirect output from any task or command to a file by using the greater-than symbol (>). For example:

st> command > outputfile

Setting Parameters

Information is provided to tasks through parameters. Parameters are used to specify names of input or output files, starting or ending values, option settings, and many other types of information that control the behavior of the task. The two most useful commands for handling parameters are:
* lparam to display the current parameter settings.
* eparam to edit parameters (page 20).

Parameters can be set in three ways:
* From the command line. For example, you could type: st> task.parameter="value"
* When running the task. For example: st> task parameter="value"
* Using epar.

Viewing Parameters with lparam

The lpar command lists the current parameter settings for a given task (Figure 2.3).

Figure 2.3: Displaying Parameter Settings with lpar

Setting Parameters with eparam

The epar command is an interactive parameter set editor. All of the parameters and their current settings are displayed on the screen, and you can move around the screen using the arrow keys (also called cursor keys) and type new settings for any parameters you wish to change.
Figure 2.4 shows a sample of the epar editor at work (invoked by typing epar strfits).

Figure 2.4: Editing Parameters with epar

Parameter Data Types--What to Specify

Parameters are either required or hidden, and each parameter expects information of a certain type. Usually, the first parameter is required, and very often it expects a file name. Parameters are described in the online help for each task. When a parameter is shown in parentheses (either in the online help or in the lpar listing), the parameter is optional. If you type the wrong type of information for a parameter, epar has some error checking capability and will usually display an error message saying "Parameter Value is Out of Range." The message is displayed when you move to another parameter or if you press Return. Parameter types are listed in Table 2.1.

Table 2.1: Parameter Data Types
------------------------------------------------------------------------------
Type       Description
------------------------------------------------------------------------------
File Name  Full name of the file. Wild card characters (* and ?) are often
           allowed. Some tasks will allow you to use special features when
           specifying file names, including "@" lists, IRAF networking
           syntax, and image section or group syntax. (See "File Name
           Syntax" below.)
Integer    Whole number. Often the task will specify minimum or maximum
           values (see the help pages).
Real       Floating point number; can be expressed in exponential notation.
           Often will have minimum and maximum values.
Boolean    Logical "yes" or "no" values.
String     Any characters. Sometimes file names are specified as strings.
Pset       Parameter set.
------------------------------------------------------------------------------

Restoring Parameter Default Values

Occasionally, IRAF will get confused by your parameter values. If this happens, you can restore the default parameters with the unlearn command. You can use unlearn on either a task or an entire package. The unlearn command can be used if you have a problem with IRAF or STSDAS parameter conflicts.
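As a sketch of how these parameter commands fit together (the task name strfits is used only as an illustration), a short session might look like:

```
st> lparam strfits     # list the current parameter settings
st> epar strfits       # edit the parameters interactively
st> unlearn strfits    # restore the task's default parameters
```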
Setting Environment Variables

IRAF uses environment variables to define which devices are used for certain operations. For example, your terminal type, default printer, and the disk and directory used for storing images are all defined through environment variables. Environment variables are set using the set command and are displayed using the show command. Table 2.2 lists some of the environment variables that you might want to change.

Table 2.2: Environment Variables
------------------------------------------------------------------------------
Variable   Description                              Example of Setting
------------------------------------------------------------------------------
printer    Default printer for text                 set printer = lp2
terminal   Terminal type                            set term = xterm
stdplot    Default printer for all graphics         set stdplot = ps2
           output, such as snap
stdimage   Default terminal display setting for     set stdimage = imt800
           image output (most users will want
           this set to either imt512 or imt800)
clobber    Allow or prevent overwriting of files    set clobber = yes
imtype     Default image type for output images.    set imtype = "hhh"
           "imh" is old IRAF format, "hhh" is
           STSDAS GEIS format.
------------------------------------------------------------------------------

You can permanently set your environment variables by editing your login.cl file so that they will automatically be set each time you use IRAF. To do this, use vi, Emacs, or whatever your favorite text editor happens to be to specify each variable on its own line. You can see the name and current value of all environment variables for your session by using the show command with no arguments.

Working with Files

This section describes:
* File structures commonly used in STSDAS and IRAF.
* How to use directory specifications.
* How to specify file names.
* How to write files to tape using stwfits.

File Structures

IRAF recognizes a number of different file structures. HST data are stored in a specific file format known as GEIS, which is different from the IRAF image format.
Both IRAF and STSDAS images consist of two files. The two files are always used together as a pair:
* A header file, which consists of descriptive information. IRAF header files are identified by an extension of .imh. STSDAS header files are in ASCII text format and are identified by an extension of .hhh or another extension ending in "h", such as .c0h or .q1h.
* A binary data file,^2 consisting of pixel information. IRAF data file names end with a .pix extension. STSDAS data files end with an extension of .hhd or another extension that ends with "d", such as .c0d or .q0d.

2. The binary data file format is host-dependent and may require translation before it can be moved to a computer using a different architecture.

STSDAS always expects both component files of a GEIS image to be kept together in the same directory. When working with IRAF or STSDAS images, you need only specify the header file name--the tasks will automatically use the binary data file when necessary. IRAF and STSDAS image formats are not similar, other than that they each have two component files. The IRAF header is part binary and part ASCII, and has no fixed format, whereas the STSDAS header consists entirely of ASCII text in 80-character records. The IRAF binary file consists of a grid of binary data preceded by a 512-byte binary description. The STSDAS binary file consists of a series of groups of binary data, each representing an entire image, and blocks of group description information called the Group Parameter Block.

Directory Specification

As we mentioned above, the two components of an image are expected to be in the same directory; tasks will look for these files in whatever directory you are currently located.^3

3. IRAF format images can use separate directories for the header and pixel files.

To navigate through directories, you should use the following commands:
* path - Lists the current working directory.
* cd directory - Move to the named directory.
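For example, a brief session moving into a directory holding your images (the directory name is only illustrative) might look like:

```
st> path          # show the current working directory
st> cd mydata     # move to the directory holding the images
```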
Specifying Files

Most tasks in IRAF and STSDAS operate on files and expect you to specify a file name for one or more parameters. There are several types of special syntax that can be used with certain tasks when specifying file names. These syntax features include:
* Wild card characters, often called templates: These are used to specify multiple files using pattern matching techniques. The wild cards are:
  *  Matches any number of characters, e.g.: z*.c0h
  ?  Matches any single character, e.g.: z01x23x.c?h
  When using wildcards with image-processing tasks, be sure to exclude the binary pixel files by ending your file name specification with an "h", for example: y*.??h
* List files, often called @-files: A list file is an ASCII file that contains a list of file names, one per line. If the task supports the list file feature, you would type the name of your list file, preceded by the "@" character. For example: @files.txt
* Image section specification. Tasks that work with image data will often let you specify that you want to work on only a small area of the image rather than the entire image. See "Image Display with the IRAF display Task" on page 29. To specify an image section, specify each axis range in square brackets, for example: image.hhh[10:200,20:200]
* Group specification. HST images are stored in a format known as GEIS format. This lets one file contain one or more sets of related data, such as images taken from different detectors or data taken at different time steps. Many STSDAS tasks will let you choose which of the groups in a multi-group image will be used. To specify a group, enclose its number in brackets, for example: image.hhh[7]. A description of GEIS format is available on-line within STSDAS by typing help geis. Note that if you use both a group and an image section specification, the group must be specified first, for example, image.hhh[2][10:200].
* IRAF networking specification.
IRAF is capable of reading and writing files to and from remote systems on a network. This is often used with tasks in the fitsio and convfile packages, or with image display tasks. You must do some work to enable this feature, however; this is described in the STSDAS Users Guide and in the online help, which can be read by typing help networking. To specify that you want to use the IRAF networking feature, specify the remote host name followed by an exclamation point (!), followed by the file or device name. For example: ra!mta.

Writing Files to Tape

The stwfits task (in the STSDAS fitsio package) is used to create FITS tape (or disk) files from multigroup HST images. The task offers several options for different ways of writing the files, but we will focus here on using the task to write files using the standard IEEE FITS format.^4

4. If you want to know how to use other formats, type help fits_exampl from within IRAF.

There are two ways to write HST images to FITS files using the stwfits task:
* With the groups stacked in a single FITS file and group parameters stored in a FITS TABLE extension.
* With each group written to a separate FITS file.

The first method has the advantage of being somewhat more efficient, but it is a relatively recent method of writing data, so older versions of some software may not recognize the TABLE extension. Writing each group of a GEIS file out to a separate FITS file may, therefore, give you better portability to other systems and software.
To write a series of GEIS images corresponding to an HST dataset out to FITS format using the stacking method with a FITS TABLE extension, you could use a command like the following:

fi> stwfits z03*.??h mta gftoxdim+

To write the same set of files to a FITS tape with each group in a separate image, you could use:

fi> stwfits z03*.??h mta gftoxdim-

VAX users should be aware that you may lose one digit of precision when using IEEE format because the VAX real*8 format uses 3 bits more than IEEE real*8. Under DEC OpenVMS with the new AXP architecture, the bit counts are the same and there is no loss of precision.

On-Line Help

This section describes:
* How to use IRAF's on-line help facility.
* How to find a task that does what you want (page 26).

If you need additional help, a table of contact information is provided in the preface to this manual.

Getting Help

On-line help is displayed by using the help command, which takes as an argument the task or package name about which you want help. Wildcards are supported. For example, to display the on-line help for the mkmultispec task, you would type:

fi> help mkmultispec

Figure 2.5: Displaying On-line Help

Two STSDAS tasks are available that will display only certain sections of the help file:
* examples - Displays only the examples for a task.
* describe - Displays only the description.

There is an optional paging front-end for help called phelp. For more information, type help phelp from within IRAF.

Finding Tasks

There are three ways to find a task that does what you need:
* Use the apropos or references task to search the online help database.
* Look at the package structure to find the right package; typing help package will list one-line descriptions of each task in the package.
* Ask. A more experienced user can usually point you in the right direction.
You can also request help from the Help Desk staff at STScI by sending e-mail to: help@stsci.edu

Using apropos

The apropos task looks through a list of IRAF and STSDAS package menus to find tasks that match a specified keyword. Figure 2.6 shows how to use apropos. Note that the name of the package in which the task is found is shown in parentheses.

STSDAS Structure

STSDAS version 1.3 is structured so that tasks are logically grouped. In many cases, you can easily find a task that does whatever function you need simply by looking in the appropriate package. For example, all of the tasks that are used in the calibration process can be found in the hst_calib package, and all tasks used for image display and plotting can be found in the graphics package. Figure 2.7 shows the STSDAS package structure.

Figure 2.7: STSDAS Version 1.3 Package Structure

Troubleshooting

There are a couple of easy things you can do to make sure that you don't have a simple memory or parameter conflict--common causes of problems.
* Look at the parameter settings and make sure that you have specified reasonable values for every parameter.
* Use the flprcache command to clear the process cache. To do this type: flpr
  You may need to do this twice in succession.
* Occasionally, you may need to log out of the CL, restart IRAF, and try your command again.

If you still have a problem, contact the Help Desk. E-mail addresses and phone numbers are provided in the Preface of this manual.

STSDAS Tables

Several of the analysis packages in STSDAS create output files in STSDAS table format, which is a binary row-column format (ASCII-format tables are also supported). You may also find STSDAS tables on your HST data tape, since all instruments other than the WF/PC-1 use STSDAS-format tables in the calibration process. Tables can be viewed, created, and manipulated using the tasks in the ttools package (or in the external TABLES package).
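As a sketch, a table in the current directory (the file name here is hypothetical) could be examined with the ttools tprint task:

```
tt> tprint phot.tab    # print a formatted listing of the table
```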
You can read a description of the table data format in the STSDAS User's Guide. To simply read the contents of a table, you can:
* Use tread to display the file and move through it using the arrow keys.
* Use tprint to display the file in a formatted manner.

Other tasks are available in ttools that will let you edit tables or create new ones, or that will let you manipulate the data, convert formats, or perform various database functions with the tables. See the online help for details.

Other Topics

Other Image Tools...

Both IRAF and STSDAS contain a huge number of tasks that work with images. Some of the packages that you should investigate are:
* images: This package includes general tasks for copying (imcopy), moving (imrename), and deleting (imdelete) image files. These tasks operate on both the header and data portions of the image. The package also contains a number of general purpose tasks for operations such as rotating and magnifying images.
* stsdas.toolbox.imgtools: This package includes general tools for working with multigroup GEIS images, including tasks for working with masks and general purpose tasks for working with the pixel data, such as an interactive pixel editor (pixedit).
* stsdas.analysis: General tasks for image analysis.

Displaying HST Images

This section will primarily be of interest to WF/PC-1, WFPC2, and FOC observers. It explains:
* How to display images in IRAF.
* Some tips for working with HST multigroup format images (page 31).
* How to combine the four groups of a WF/PC-1 or WFPC2 image into a single mosaic image using either the STSDAS wmosaic or qwmosaic tasks (page 32).

Image Display with the IRAF display Task

The most general IRAF task for displaying image data is the display task. This is the best choice for a first look at FOC, WF/PC-1, or WFPC2 data, but it can also be used to display HSP area mode images. To display an image, you need to:
1.
Start an image display server (we use SAOimage) in a separate window from your IRAF session (i.e., either from a different xterm window, or as a background job before starting IRAF).^6 To start SAOimage, type the following in any xterm or other system window:

saoimage &

2. Load the images and tv packages (from the window where you're running IRAF):

cl> images
im> tv

3. Display the image using the IRAF display task:

tv> display rootname.c1h

Figure 2.8 shows how to display group 2 of a WF/PC-1 image.

6. There are a few choices for display servers, including SAOimage, SAOtng (the next generation of SAOimage), and Ximtool. The most popular choice at present is SAOimage, and it will be described here. SAOtng may be retrieved via anonymous FTP from sao-ftp.harvard.edu in the directory ~ftp/pub/rd. Ximtool may be retrieved via anonymous FTP from iraf.noao.edu in the directory ~pub/v2103-beta.

Figure 2.8: Displaying an Image

You can print a hardcopy of the displayed image using the SAOimage command buttons:
1. Click "etc".
2. Click "print".

Modifying the Display

There are two ways to change the way in which your image is displayed:
* SAOimage command buttons can change zooming, panning, etc.
* Image intensity can be changed by resetting display task parameters.

Once the image appears in your SAOimage window, you can use the SAOimage commands displayed near the top third of the image window to manipulate or print your image. These commands are described in the SAOimage Users Guide, although most are fairly intuitive (on-line help is also available at the system level; type man saoimage if you are using a Unix system or help saoimage if you are using VMS). Click on buttons to scale the image, pan, print, or perform other commonly-used functions. The example in Figure 2.8 shows how you should display the image for a first look. By default, display will automatically scale (autoscale) the image intensity using a small sampling area in the center of the image.
During your first look, you may want to experiment with the image intensity scaling to improve the look of the image. This can be done using the zscale, zrange, z1, and z2 parameters. zscale is a switch that turns the autoscaling off and on; by setting zscale- and zrange+ you can use the minimum and maximum values in the entire image as the minimum and maximum intensity values. To choose your own minimum and maximum intensity display values, set zscale-, zrange-, z1 to the minimum value, and z2 to the maximum value that you want displayed. For example:
im> disp w0mw0507v.c0h zrange- zscale- z1=2.78 z2=15.27
Notice in Figure 2.8 that when you run display, the task shows you the z1 and z2 values that it calculated. You can use these as starting points in estimating your own reasonable values for minimum and maximum intensity. More information about these parameters is provided from within IRAF by typing help display.

Working with Image Sections and Groups

You display a specific group of an image by specifying the group number in square brackets following the file name. You can also work with only part of the total image by specifying a pixel range, also enclosed in square brackets. The range is the starting point and ending point, with a colon separating the two. The x (horizontal) axis is specified first, then the y (vertical) axis, separated by a comma. For example, to specify a pixel range from 101 to 200 in the x direction and all pixels in the y direction from group 3 of an image, you would use a command such as:
tv> display image.hhh[3][101:200,*] 1
(The trailing 1 is the display frame number.) If you use both group and image section syntax together, the group number must come first.

Figure 2.9: Displaying Sections and Groups of an Image

Mosaic WF/PC-1 Images

WF/PC-1 images have four groups, one for each detector. You may wish to see the four images combined into one larger image. (A sample mosaic WF/PC-1 image is shown in Figure 2.10.)
The qwmosaic task in the STSDAS hst_calib.wfpc package will quickly combine the four groups of a WF/PC-1 image into a single mosaic image that shows each group in one of the four quadrants of the image with roughly the correct orientation. Note that qwmosaic is a quick version of wmosaic and is recommended for a quick first look at the data in mosaic form. If you need precise astrometric alignment, use the wmosaic task. To produce a quick mosaic image of the four WF/PC-1 groups:
1. Run the qwmosaic task to combine the four groups into a single mosaic image.
2. Display the image using display (as described on page 29).

Figure 2.10: WF/PC-1 Mosaic Image

Analyzing HST Images

This section describes methods for using STSDAS and IRAF to work with two-dimensional image data from HST. Subjects include:
* A discussion of RA and Dec and how to work with coordinate information.
* Using the photometric keywords.
* Removing cosmic rays and image defects.
* A discussion of various image plotting and manipulation tasks.
In Tables 2.3 through 2.6 we provide brief descriptions of STSDAS and IRAF tasks you may find useful for working with your HST images. Many of these tasks are described in more detail in the sections below.

Table 2.3: Image Arithmetic and Transformation Tasks

Task      Package                   Purpose
------------------------------------------------------------------------------
crrej     stsdas.hst_calib.wfpc     Combine images to make an image free of cosmic rays^a
gcombine  stsdas.toolbox.imgtools   Combine images using various algorithms and rejection schemes
geomap    images                    Compute a coordinate transformation
geotran   images                    Resample an image based on geomap output
imcalc    stsdas.toolbox.imgtools   Perform general arithmetic on images^a
imexpr    images                    General image arithmetic
magnify   images                    Magnify an image
rotate    images                    Rotate an image
wmosaic   stsdas.hst_calib.wfpc     Mosaic four WF/PC-1 or WFPC2 frames into a single image^a
------------------------------------------------------------------------------
a. Handles GEIS multigroup images.

Table 2.4: Image Coordinate Tasks

Task       Package                   Purpose
------------------------------------------------------------------------------
metric     stsdas.hst_calib.wfpc     Translate WF/PC-1 and WFPC2 pixel coordinates to RA and Dec (with geometric correction)^a
pixcoord   stsdas.hst_calib.wfpc     Compute pixel coordinates of stars in an image^a
rimcursor  lists                     Determine RA and Dec of a pixel in an image
wcscoords  xray.xspatial             Use WCS to convert between IRAF coordinate systems
xy2rd      stsdas.toolbox.imgtools   Translate two-dimensional image pixel coordinates to RA and Dec
------------------------------------------------------------------------------
a. Handles GEIS multigroup images.

Table 2.5: Image Display and Graphics Tasks

Task       Package                   Purpose
------------------------------------------------------------------------------
compass    stsdas.graphics.sdisplay  Draw north and east arrows on image display or into image itself
display    tv.display                Display an image
disconlab  stsdas.graphics.sdisplay  Display image and optionally overlay contours and coordinate grid
fieldplot  stsdas.graphics.stplot    Graph vector field
imexamine  images.tv                 Examine images using display, plots, and text
implot     plot                      Plot lines and columns of images
newcont    stsdas.graphics.stplot    Draw contours of two-dimensional data
north      stsdas.hst_calib.ctools   Display orientation of image based on header keywords
sgraph     stsdas.graphics.stplot    Plot spectra and image lines, overplotting error bars
siaper     stsdas.graphics.stplot    Plot science instrument apertures of HST
wcslab     stsdas.graphics.stplot    Produce sky projection grids and labels for images
------------------------------------------------------------------------------

Table 2.6: Image Manipulation and Utility Tasks

Task         Package                   Purpose
------------------------------------------------------------------------------
boxcar       images                    Boxcar smooth a list of images
gcopy        stsdas.toolbox.imgtools   Copy multigroup images^a
grlist       stsdas.graphics.stplot    List file names (with groups) for all groups in an image (make lists)
gstatistics  stsdas.toolbox.imgtools   Compute image statistics^a
imedit       images.tv                 Fill in regions of an image by interpolation from nearby regions
noisemodel   stsdas.hst_calib.wfpc     Determine noise model parameters from CCD frames^a
plcreate     xray.ximages              Create a pixel list from a region file (e.g., from SAOimage)
saodump      stsdas.graphics.sdisplay  Make image and colormap files from SAOimage display
------------------------------------------------------------------------------
a. Handles GEIS multigroup images.

RA and Dec

This section describes how to determine the orientation of your HST images and how to determine the RA and Dec location of a pixel or source on these images. Included in this section are:
* An overview of the positional information for HST images.
* Using the rimcursor task to determine RA and Dec.
* Using the metric task to determine RA and Dec, taking into consideration geometric distortion.
* Improving the astrometric zero-point accuracy.

Overview of Positional Information

Every calibrated WF/PC-1, WFPC2, and FOC image has an astrometric plate solution written in the standard FITS astrometry header keywords (CRPIX1, CRPIX2, CRVAL1, CRVAL2, and the CD matrix: CD1_1, CD1_2, CD2_1, and CD2_2). Tasks within IRAF or other packages can use this information to convert between pixel coordinates and RA and Dec. Table 2.7 lists some tasks in IRAF and STSDAS for working with positional information.
Table 2.7: IRAF and STSDAS Tasks for Working with Positions

Task       Purpose
------------------------------------------------------------------------------
compass    Plot north and east arrows on images
disconlab  Display image and overlay contours and coordinate grid
metric     Translate WF/PC-1 pixel coordinates to RA and Dec with geometric corrections
north      Display the orientation of an image based on keywords
rimcursor  Determine RA and Dec of a pixel in an image
wcscoords  Use WCS to convert between IRAF coordinate systems
wcslab     Produce sky projection grids for images
xy2rd      Translate pixel coordinates to RA and Dec
------------------------------------------------------------------------------

To find the RA and Dec of a pixel or source on an FOC image, you can use the rimcursor task directly on the c1h file.^8 To find the RA and Dec of a pixel or source on a WF/PC-1 image, you can either use the rimcursor task on the WF/PC-1 mosaic image, or you can use the metric task directly on each individual WF/PC-1 chip image (i.e., on the individual groups). Instructions for using rimcursor and metric are described below.

Do not use the rimcursor or xyeq tasks directly on the images in the WF/PC-1 .c0h file if you require accurate relative positions. Calibrated WF/PC-1 images retain a residual distortion which will affect the accuracy of relative positions. Both wmosaic and metric correct for this distortion.

^8 See "Limitations of the Calibration Process" on page 157 for information about calibrated FOC images.

Using rimcursor

The rimcursor task in the IRAF lists package can be used to determine the RA and Dec of a pixel on any image. The rimcursor task uses the astrometric header keywords to derive the coordinates of the cursor position. Cursor positions can be read off the image display (this is the default mode), or a file can be specified containing the (x,y) pixel positions.
If you want to provide a text file of positions, set the value of the cursor parameter to the name of the text file. The text file must have the following format:
441. 410. 101 \040
208. 506. 101 \040
378.5 68.5 101 \040
Note that the last column can contain a letter. In this example, the 441 and 410 on the first line are the pixel location whose RA and Dec you wish to know. The "101 \040" after the pixel coordinates is formatting information expected by rimcursor (see the online help for details). To get RA and Dec in hh:mm:ss.dd and dd:mm:ss.d format, it is necessary to specify, for example:
cl> rimcursor w0sn0307t.c0h wcs=world \
>>> wxformat=%12.2H wyformat=%12.1h
The number after the decimal point in the format specifications is the number of digits to print after the decimal in the seconds (of time or arc); the 12 is the field width. A capital "H" means divide the number by 15 (to convert from degrees to hours) before formatting, while a lowercase "h" just prints in dd:mm:ss.d format. The pixel coordinates in this file can be fractional pixel values, perhaps derived through the use of a source centroiding task such as imcntr (found in the proto package) or from daophot (in the digiphot package).

Using metric for WF/PC-1 and WFPC2 Images

The metric task in the STSDAS hst_calib.wfpc package determines the RA and Dec of any pixel (from any group) in a WF/PC-1 or WFPC2 multigroup GEIS file, taking into account (and correcting for) the geometric distortions of the specified CCD chip and its offset relative to the reference chip. The reference chip is always at group 2, i.e., for the Wide-Field Camera in WF/PC-1 it is WF2 and for the Planetary Camera it is PC6. After applying the geometric and transformation corrections, metric uses the reference chip's group parameters CRVAL1, CRVAL2, CRPIX1, and CRPIX2 in conjunction with the CD matrix coefficients to translate the corrected pixel coordinates to RA and Dec.
The resulting RA and Dec will therefore be in the epoch of these group parameters, usually J2000. The metric task can take three kinds of input:
* A single pair of (x,y) coordinates, specified by using the x and y input parameters.
* A set of x and y pixels passed in a table, specified by passing the table and column name (separated by a space) for the x and y parameters.
* Interactive cursor input, specified by setting x to "", which will cause the image to be displayed. You can then move the cursor to the desired position and press a key to get the coordinates. Pressing Q will end the task; pressing C will enable centroiding; pressing any other key will return the x,y coordinate at the cursor position.
Centroiding is used to calculate a refined center position of a point source in the image, using the given position as an approximate initial value. Choosing a proper box size for centroiding is important: for example, if there is another star close to the target star, a large box will put the centroid between these two stars instead of at the center of the target star. The algorithm used here is identical to that of the IRAF task imcntr (refer to the help file of imcntr for details of the algorithm). Results are displayed on your terminal screen. The output will contain the following columns:
* Original input pixel coordinates.
* Pixel coordinates after centroiding (if applicable).
* Coordinates after geometric corrections.
* Coordinates after transformation to the reference chip.
* RA and Dec.
Figure 2.11 shows an example of metric being used to determine the centroided location of a source in a WF/PC-1 group 4 image via the interactive cursor input. Before running metric in this mode, you must start SAOimage (or another display server). See the discussion of image display on page 29.

The metric task cannot be used on WF/PC-1 or WFPC2 mosaicked images. Use the rimcursor task directly on these images.
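The centroiding step described above can be illustrated in a few lines. The following is a hedged sketch in plain Python: a flux-weighted mean over a box around the initial guess. The real imcntr algorithm uses background-subtracted marginal sums and iteration, which are omitted here, and the frame data are invented for illustration.

```python
def centroid(data, x0, y0, boxsize=5):
    """Refine an approximate source position (x0, y0) with a
    flux-weighted mean over a boxsize x boxsize region.
    data is indexed as data[y][x]. Simplified sketch; not imcntr."""
    half = boxsize // 2
    ys = range(max(0, y0 - half), min(len(data), y0 + half + 1))
    xs = range(max(0, x0 - half), min(len(data[0]), x0 + half + 1))
    total = sum(data[y][x] for y in ys for x in xs)
    cx = sum(x * data[y][x] for y in ys for x in xs) / total
    cy = sum(y * data[y][x] for y in ys for x in xs) / total
    return cx, cy

# A tiny frame with a symmetric "star" centered at (x, y) = (3, 2):
frame = [[0.0] * 7 for _ in range(7)]
frame[2][3] = 10.0
frame[2][2] = frame[2][4] = 5.0
frame[1][3] = frame[3][3] = 5.0

print(centroid(frame, x0=3, y0=2))  # (3.0, 2.0)
```

Note how the box-size caveat above shows up here: if a second star fell inside the box, its flux would pull the weighted mean away from the target's center.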
Figure 2.11: Determining Centroided Star Position Using metric

Improving Your Astrometric Accuracy

Differential astrometry (measuring the position of one object relative to another in an image) is easy and relatively accurate for HST images, while absolute astrometry is more difficult due to uncertainties in the locations of the instrument apertures relative to the Optical Telescope Assembly (OTA or V1) axis and the inherent uncertainty in the Guide Star positions. However, if you can determine the position of any single star in your HST image, then your absolute astrometric accuracy will be limited only by the accuracy with which you know that star's location and the image orientation. If there is a star on your image suitable for astrometry, you may wish to extract an image of the sky around this star from the Digitized Palomar Sky Survey and measure the position of that star using, for example, the GASP software. This can provide an absolute positional accuracy of approximately 0.7 arcseconds. Contact the Help Desk for assistance (send e-mail to help@stsci.edu).

Photometry

Included in this section are:
* A description of how to use the header keyword information to convert from counts to flux or magnitude.
* A description of some of the tasks within IRAF and STSDAS that may be useful for determining source fluxes.
* A description of how to use the STSDAS synphot package to re-derive your flux scale.

Converting Counts to Flux or Magnitudes

All calibrated HST images have units of counts or Data Numbers (DN). During the correction for absolute sensitivity in pipeline calibration, calfoc, calwfp, and calwp2 do not alter the units of the pixels in the image. Instead they calculate the conversion factors, the inverse sensitivity PHOTFLAM and the zero point of the magnitude scale PHOTZPT, and write them into header keywords in the calibrated data. For WF/PC-1 and WFPC2 images, these values are written as group parameters (i.e., they are assigned for each group).
To convert from data numbers (or counts) to flux in units of erg/cm^2/s/A, multiply by the value of the PHOTFLAM header keyword and divide by the value of the EXPTIME keyword (the exposure time). You can use the STSDAS task imcalc to convert an entire image from counts to flux units. You can use hedit on an FOC image to determine the values of the PHOTFLAM and EXPTIME header keywords and then convert the image to flux units using imexpr. For example:
st> imexpr "a*b/c" outimage.hhh \
>>> a=x0fp0102t.c1h b=a.photflam \
>>> c=a.exptime
If, instead, you wish to convert to magnitudes, they are computed as:
m = -2.5 x log10(F) + PHOTZPT
where:
* PHOTZPT is the zero point of the ST magnitude scale, as given by the value of the PHOTZPT header keyword.
* F is the flux = counts x PHOTFLAM / EXPTIME.
The zero point of the ST magnitude system is -21.10. This value is chosen so that Vega has an ST magnitude of zero for the Johnson V passband (see Koornneef et al., 1986; Horne, 1988; and the Synphot User's Guide). The inverse sensitivity (as given by the PHOTFLAM keyword) is the key to converting from counts or DN to flux units. PHOTFLAM is defined to be the mean flux density (i.e., the flux density of a source with a spectrum that is flat in f_lambda across the bandpass of your observation) that produces a count rate of 1 count per second with the HST observing mode (PHOTMODE) used in your observation. If the spectrum of your source is significantly different from flat (e.g., if you have observed a source dominated by an emission line or with a strong slope across the bandpass), you may wish to recalculate the conversion factor from counts to flux units for your particular source, using synphot (below). In addition, WF/PC-1 observers should note that the PHOTFLAM value calculated by calwfp does not include a correction for temporal variations in the throughput of the instrument due to contamination buildup.
For WF/PC-1 observations of sources blueward of 5000 A, a correction should be applied to the flux calibration of your images for this effect. Likewise, FOC observers should note that the PHOTFLAM value calculated by calfoc does not correct for format-dependent differences in sensitivity; FOC observations made in formats other than 512 x 512 should be corrected for this effect (see Table 8.4: "Format-Dependent Sensitivity Ratios" on page 166). If your HST image contains a source whose flux you know from ground-based measurements, you may choose to check or determine the final photometry of your HST image from the counts observed for this source.

Using Synphot to Re-derive your Flux Scale

The conversion of observed count rates to absolute fluxes (i.e., the process of flux calibration) is accomplished for HST imaging detectors by using both knowledge of the instrumental sensitivities and principles of synthetic photometry. In this section we introduce the theory and principles (below) and then provide a section of examples of how to actually use synphot (page 44).

Theory

The detected count rate through a broad passband P is given by:

C(P) = (A / hc) ∫ P(λ) f_λ(λ) λ dλ

where:
* A is the nominal collecting area of the telescope.
* f_λ(λ) is the flux density of the target as a function of wavelength.
* P(λ) is the dimensionless bandpass throughput function.
* The division by hν = hc/λ converts the energy flux to a photon flux, as is appropriate for photon-counting detectors.
Alternatively, we can write this expression as:

C(P) = f_λ(P) / U_λ(P)

where:
* f_λ(P) denotes the appropriate average of the star's flux density spectrum over the passband P.
* U_λ(P) is the flux density required to produce a unit response of 1 count s^-1 within the passband and is referred to as the inverse sensitivity of the photometric mode.
Following this formulation, the precise definition of the mean flux density in a broad passband must be:

f_λ(P) = ∫ P(λ) f_λ(λ) λ dλ / ∫ P(λ) λ dλ

and the count rate to flux density conversion factor for a broad passband is:

U_λ(P) = hc / ( A ∫ P(λ) λ dλ )

So for a given observed count rate C, the corresponding mean flux density within the passband is:

f_λ(P) = U_λ(P) C

Corresponding magnitudes are simply given by:

m_λ(P) = -2.5 log U_λ(P) C + K

where K is chosen for convenience to be -21.10 (for U_λ in units of erg cm^-2 A^-1) so that m_λ(5500 A) is approximately equal to the Johnson V magnitude. Notice that the mean flux density depends on the spectrum and the passband shape but not on its overall normalization, while the conversion factor U depends only on the integrated area of the passband; it does not depend on the spectral shape of the source being observed. The value of U is computed during the routine pipeline processing of WF/PC-1 and FOC imaging observations, based on the known throughput function of the observing mode in use, and is recorded in the image header keyword PHOTFLAM. To compute the mean absolute flux density of a source within the observed passband, one merely has to multiply PHOTFLAM by the observed count rate of the source. This process yields the correct value for the mean flux density of the source as multiplied by the passband in use, no matter what the spectral characteristics of the source may be. It does not necessarily yield the intrinsic flux density of the source at the effective wavelength of the passband. Only for a source whose spectrum has constant f_λ as a function of wavelength will this calibration process yield the intrinsic flux density of the source at any wavelength within the passband.
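The definitions above can be checked numerically. This is a minimal sketch in plain Python, using trapezoid sums over a toy rectangular passband; the throughput and flux values are illustrative assumptions, not real HST quantities (7.26e-17 erg/s/cm^2/A echoes the F569W example discussed later, but the passband here is invented).

```python
import math

def trapz(y, x):
    """Simple trapezoid integration."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

def mean_flux_density(wave, thru, flux):
    """<f_lambda>(P) = int P*f*lam dlam / int P*lam dlam, per the text."""
    num = trapz([p * f * w for p, f, w in zip(thru, flux, wave)], wave)
    den = trapz([p * w for p, w in zip(thru, wave)], wave)
    return num / den

def st_magnitude(mean_flux, k=-21.10):
    """m_lambda(P) = -2.5 log10(U*C) + K, with U*C the mean flux density."""
    return -2.5 * math.log10(mean_flux) + k

# Toy rectangular passband from 5000 to 6000 A (illustrative):
wave = [5000.0 + 10.0 * i for i in range(101)]
thru = [1.0] * len(wave)

# A spectrum flat in f_lambda: its mean flux density must equal the
# constant itself, whatever the passband shape.
flat = [7.26e-17] * len(wave)
mean = mean_flux_density(wave, thru, flat)
print(abs(mean - 7.26e-17) < 1e-22)   # True: flat spectrum recovered
print(round(st_magnitude(mean), 2))   # 19.25
```

The first check makes the key point concrete: for a flat f_lambda spectrum, and only then, the mean flux density equals the intrinsic flux density at every wavelength in the passband.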
For example, Figure 2.12 shows flux spectra for two blackbody sources, with temperatures of 1500 K and 15000 K, over the wavelength range of the WFPC2 F569W passband (PHOTMODE). Both spectra have been normalized so that they produce an observed count rate of 100 counts per second in the passband. For this passband PHOTFLAM = 7.26E-19 erg/cm^2/A/count, hence both of these spectra have a mean flux density of 7.26E-17 erg/s/cm^2/A within this passband. In the case of the 15000 K source, the mean flux density also happens to be nearly identical to the actual flux density at the effective wavelength (or pivot wavelength in synphot terminology) of the passband (~5657 A). As you can see from the figure, the flux density of the 1500 K source at the pivot wavelength of the passband is significantly lower (~4.8E-17 erg/s/cm^2/A). In Figure 2.12, the dotted line shows the shape of the passband throughput function. The mean flux within the passband is indicated by the filled circle, and the horizontal line on either side shows the equivalent FWHM extent of the passband. The open circle indicates the flux density of the 1500 K spectrum at the pivot wavelength of the passband.

Figure 2.12: Spectra of a 1500 K and a 15000 K Blackbody, Normalized to Produce 100 counts/sec in the WFPC2 F569W Observing Mode

Using Synphot

If the spectral type of a source is known, the intrinsic flux density spectrum can be computed using tasks from the STSDAS synthetic photometry (synphot) package. This is done by using the synphot tasks to create a spectrum that produces the same observed count rate within a given passband as was obtained in an actual WF/PC-1, WFPC2, or FOC image.
If the observed source is known to have a stellar-like spectrum, a matching spectral type can be found in one of several spectral atlases that are available for online use at STScI and can be exported to off-site locations (see page 93 of the Synphot User's Guide for more details on the availability and contents of these atlases). Alternatively, you can create simple blackbody and power-law spectra using the built-in synphot spectral synthesis capabilities, or you can use your own independent spectral data for the same source or similar sources. For example, let's say you have an FOC observation of a source that is known to have a G5V spectral type, and the observation used the f/96 relay and the F430W filter. Let's also say that the source produced an observed count rate of 152.3 counts s^-1 in the image. There is a spectrum of a G5V star in table number 51 of the Bruzual-Persson-Gunn-Stryker spectral atlas. You can use the synphot calcspec task to create a G5V spectrum that produces this same count rate, as shown below.

Figure 2.13: Creating G5V Spectrum with calcspec

sy> calcspec "rn(crgridbpgs$bpgs_51,band(foc,f/96,f430w),152.3,counts)" \
>>> g5spec.tab flam

Here we are using the renormalize function, "rn", to produce a spectrum that has the desired observed count rate. The renormalize function takes four arguments: the input spectrum, which in this case we are reading from the table crgridbpgs$bpgs_51.tab; the passband over which to compute the normalization, which in this example is defined by the FOC, f/96, F430W observing mode; the desired normalization value, 152.3; and finally the units of the normalization value, counts. The computed spectrum will be placed in the output table g5spec.tab, and the units of the output spectral data are specified to be flambda by the parameter flam. The resulting spectrum can be plotted using the synphot task plspec. From this you can determine the spectral flux density at any desired wavelength.
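Since the count rate is linear in the flux, the renormalization performed by rn amounts to a single scale factor applied to the whole spectrum. A hedged pure-Python sketch of that idea (toy numbers; this is not the synphot implementation):

```python
def renormalize(flux, current_countrate, target_countrate):
    """Scale a spectrum so that its passband count rate matches the
    target. Count rate is linear in flux, so one multiplicative
    factor suffices. Toy illustration of what rn() accomplishes."""
    scale = target_countrate / current_countrate
    return [f * scale for f in flux]

# A toy spectrum that currently yields 76.15 counts/s in the band;
# rescale it to the observed 152.3 counts/s (values illustrative):
spec = [1.0e-14, 2.0e-14, 1.5e-14]
renorm = renormalize(spec, 76.15, 152.3)
print(round(renorm[0] / spec[0], 3))  # 2.0: every point doubled
```

The spectral shape is untouched; only the normalization changes, which is why the renormalized atlas spectrum can then be read off at any wavelength.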
The format of some input to synphot tasks was changed with the introduction of a new expression evaluator in November 1993. You may need to update your software or see the Synphot User's Guide to find the equivalent syntax for the older versions.

Other synthetic photometry operations can also be performed on this spectrum, such as determining what the flux density or magnitude of this source would be in the standard Johnson-Cousins UBVRI passbands or in other HST passbands. This can be accomplished using the calcphot task as follows:
sy> calcphot flam v g5spec.tab
which will calculate the mean flux density of the spectrum within the Johnson V passband, which turns out to be 4.33 x 10^-16 erg/s/cm^2/A, or:
sy> calcphot vegamag v g5spec.tab
which will calculate the V magnitude (relative to Vega) of the spectrum (which is 17.30), or:
sy> calcphot counts wfpc2,4,F439W g5spec.tab
to determine what count rate this source will produce in the WFPC2 F439W passband for detector WF4 (80 counts per second).

IRAF and STSDAS Photometry Tasks

In Table 2.8 we list some of the tasks available within IRAF and STSDAS for determining source (or background) counts and magnitudes. A general discussion of doing photometry in IRAF is given in the document "Photometry Using IRAF" by Lisa A. Wells, February 1994, available from NOAO via anonymous FTP to tucana.noao.edu. See also "A User's Guide to Stellar CCD Photometry with IRAF," by P. Massey and L. Davis, 1992, also available from NOAO.

Table 2.8: Tasks and Packages for Determining Counts and Magnitudes

Task or Package     Purpose
------------------------------------------------------------------------------
daophot and apphot  Tasks for doing stellar photometry; see online help files for details. daophot contains tasks for doing crowded-field photometry, apphot for doing aperture photometry. Both are in the digiphot package.
imstat              Compute and print image pixel statistics (in the images package).
imcnts              Sum counts over a specified region, subtracting background. Can be used with a mask file (such as from plcreate). In the xray.xspatial package.
wstat               Compute and print image pixel statistics (in STSDAS hst_calib.wfpc).
plcreate            Create a pixel mask from a region descriptor (in xray.ximages).
isophote            Tasks for fitting and graphing elliptical isophotes (in the STSDAS analysis package). See online help for details.
------------------------------------------------------------------------------

The apphot package provides an option for measuring flux within a series of concentric apertures. This can be used to determine the flux in the wings of the PSF, which is useful if you wish to estimate the flux of a saturated star by scaling the flux in the wings of its PSF by the ratio of total to wing flux for unsaturated stars.

Removing Cosmic Rays and Image Defects

There are many tasks in IRAF and STSDAS that can be used to remove cosmic rays and other image defects from your HST images. In Table 2.9, we list some of these tasks and briefly explain their functions. Of particular interest to HST observers are:
* The crrej, gcombine, and cosmicrays tasks, which remove cosmic rays from WF/PC-1 or WFPC2 images.
* The imedit task, which removes image defects. This is particularly useful for removing reseau marks from FOC images.
These are described in more detail below. Note: FOC images do not suffer from cosmic rays.

Table 2.9: Tasks for Removing Image Defects

Task        Function
------------------------------------------------------------------------------
cosmicrays  Detect and replace cosmic rays in a single image, based on the flux of each pixel relative to the mean flux of its neighbors; works on multigroup images (in the noao.imred.ccdred package).
gcombine    Combine a set of images to produce a weighted average or median; pixels may be rejected using various algorithms (in the stsdas.toolbox.imgtools package).
crrej       Generate an image free of cosmic rays from multiple exposures of the same field; works with group format images (in the STSDAS hst_calib.wfpc package).
imedit      General utility for examining and editing pixels in an image (in the images.tv package).
rremovex    Remove reseau marks from FOC images by filling pixels under reseau marks with the average of neighboring pixels (in the STSDAS hst_calib.foc.focphot package).
------------------------------------------------------------------------------

Removing Cosmic Rays

If you have multiple images of the same field, you can use the crrej task to remove cosmic rays and image defects. If you have only a single image of a given field, you can use the cosmicrays task to detect cosmic rays and replace them with the average value of the surrounding pixels.

Using the cosmicrays Task

The cosmicrays task (in noao.imred.ccdred) detects potential cosmic rays (or, more generally, potential bad pixels) by searching for the brightest pixel that exceeds the average value of the pixels in a given area (excluding the two brightest pixels) by an amount defined by the threshold parameter; the size of the area is defined by the window parameter. The task then subtracts a background plane from the pixels in the window (determined by fitting the edge pixels of the window). The potential cosmic ray pixel is declared a bad pixel if its background-subtracted value, divided by the mean (background-subtracted) flux in the window, is greater than the value of the fluxratio parameter. The cosmicrays task records the mean background-subtracted flux in the window and the ratio of the bad pixel and background fluxes. The task only considers the brightest pixel in a given window as a potential bad pixel in a given pass; however, you can specify the number of detection passes the task should perform by setting the npasses parameter.
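The per-window rule just described can be sketched directly. This is a simplified illustration in plain Python: the window minimum stands in for the fitted background plane (an assumption made for brevity), and the window values are invented. It is not the actual cosmicrays code.

```python
def flag_brightest(window, threshold, fluxratio):
    """Apply the detection rule described above to one window of
    pixel values: the brightest pixel is a candidate if it exceeds
    the mean of the others (excluding the two brightest) by
    `threshold`, and it is flagged if its background-subtracted
    value divided by the mean background-subtracted flux exceeds
    `fluxratio`. Background = window minimum (a simplification)."""
    pixels = sorted(window)
    peak = pixels[-1]
    others = pixels[:-2]                      # drop the two brightest
    if peak - sum(others) / len(others) <= threshold:
        return False
    bg = pixels[0]                            # crude background plane
    mean_flux = sum(p - bg for p in others) / len(others)
    return (peak - bg) / mean_flux > fluxratio

# A 5x5 window of gently sloping sky (90..113 DN) plus one 5000 DN hit,
# and the same window without the hit:
hit = [90.0 + i for i in range(24)] + [5000.0]
quiet = [90.0 + i for i in range(25)]
print(flag_brightest(hit, threshold=25.0, fluxratio=2.0))    # True
print(flag_brightest(quiet, threshold=25.0, fluxratio=2.0))  # False
```

Because only the brightest pixel per window is tested, a second cosmic ray in the same window survives this pass, which is exactly why the npasses parameter exists.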
A plot can be made of the flux ratio as a function of the mean flux, and an interactive mode is available (by setting the parameter interactive=yes) which allows you to examine the pixels flagged by the task, alter the threshold and fluxratio levels, delete or undelete pixels flagged by the task, and display a surface plot of the window around any candidate. The cosmicrays task can create a bad pixel file (specified by the badpix parameter) containing a list of the coordinates of the bad pixels found by the task. You may also wish to examine the image statistics of your pre- and post-corrected image (for example, using gstatistics) to ensure that you have not flagged real features. The cosmicrays task works on only a single group of a multigroup file at a time.

Using crrej

The crrej task (in the STSDAS hst_calib.wfpc package) is designed to take multiple exposures of a given field and combine the images while rejecting very high counts in each pixel stack. It is a general-purpose task, but was designed with WF/PC-1 images in mind. The task begins with a guess image (usually either the minimum or median of the input images, as specified through the initial parameter). Cosmic ray pixels are selected as those pixels with values greater than or less than N times the noise (where N is specified through the sigmas parameter). The noise is determined using the noisepar parameter set. The noisepar parameters specify the coefficients of the model used to estimate the noise: the total noise is the square root of the quadratic sum of the pixel DN value (scaled by the detector gain specified by the gain parameter), the CCD readout noise (specified by the readnoise parameter), and a term that scales as a percentage of the DN value (as given by the scalenoise parameter). Stack pixels that are more than N times the noise bigger or smaller than the guess image value are considered bad and are not used in the creation of the output, cosmic-ray-free, average image.
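One plausible reading of that noise model, sketched in plain Python. The exact term definitions (in particular how the gain enters the Poisson term) are our assumption; consult the crrej online help and the noisepar parameter set for the authoritative model. The detector values below are illustrative, not real WF/PC-1 parameters.

```python
import math

def crrej_noise(dn, gain, readnoise, scalenoise):
    """Noise estimate (in DN) as the square root of the quadratic
    sum of: Poisson noise from the DN value scaled by the gain,
    CCD readout noise, and a percentage-of-DN term. The scaling of
    each term is an assumption; see the crrej help for the model."""
    return math.sqrt(dn / gain
                     + readnoise ** 2
                     + (scalenoise / 100.0 * dn) ** 2)

def reject(pixel, guess, n_sigma, gain, readnoise, scalenoise):
    """A stack pixel is rejected if it differs from the guess image
    by more than N times the noise, per the description above."""
    noise = crrej_noise(guess, gain, readnoise, scalenoise)
    return abs(pixel - guess) > n_sigma * noise

# A 200 DN stack pixel against a 100 DN guess image, tested at 4 sigma:
print(reject(200.0, 100.0, 4.0, gain=7.5, readnoise=13.0,
             scalenoise=0.0))   # True: 100 DN excess exceeds 4 sigma
```

Running this per stack pixel, once per entry in sigmas, mirrors the iterative tightening (N = 4, 4, 3, 2) that the default setting performs.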
It is possible to automatically discard pixels adjacent to rejected pixels (by setting the radius parameter) or to reject adjacent pixels based on their own rejection threshold (the pfactor parameter). You can also disable the discarding of neighboring pixels for rejected pixels greater than a certain value (specified through the hotthresh parameter). This is useful for dealing with hot pixels, which, unlike cosmic rays, do not affect neighboring pixels. You may wish to perform the rejection process several times to allow the solution to slowly reach equilibrium and to allow information about rejected pixels to propagate into adjacent pixels. The default setting sigmas=4,4,3,2 runs the program four times, with N=4, 4, 3, and 2, respectively, in the four iterations. The crrej task will work on multigroup images; see the online help for more details.

Note that the task assumes that the images in the stack are registered (which will be the case with WF/PC-1 CR-SPLIT images). If your images are not registered, you will need to register them before using crrej. The crrej task should only be used on a stack of well-aligned images with similar exposure times.

Using imedit
The imedit task (in the IRAF images.tv package) is a sophisticated interactive image-editing task that allows you to select pixels and replace them with a value derived from neighboring pixels, a value derived from a user-defined background area, or a predetermined value (to flag pixels, for example). The pixels to be edited can be selected interactively or from a list. Background-subtracted image statistics can be determined and displayed for subregions around the selected pixels, and the image can be redisplayed after each pixel edit (or after some number of edits). Extensive online help for imedit is provided, including detailed instructions on setting input parameters.
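A minimal sketch of the kind of neighbor-based replacement imedit performs (illustrative only; imedit actually fits a background surface, and the buffer and width arguments here only loosely mimic its parameters of the same names):

```python
import numpy as np

def replace_with_neighbors(image, x, y, buffer=1, width=2):
    """Replace pixel (y, x) with the mean of a square annulus of
    neighboring pixels, excluding an inner buffer around the bad pixel."""
    inner, outer = buffer, buffer + width
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    dist = np.maximum(np.abs(yy - y), np.abs(xx - x))   # square annulus
    ring = (dist > inner) & (dist <= outer)
    out = image.copy()
    out[y, x] = image[ring].mean()          # fill from the surrounding ring
    return out
```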
Removing Reseau Marks from FOC Images
One way you might use imedit is to remove reseau marks from an FOC image. If you need to remove only a few obvious reseau marks from your image, the easiest approach is to remove them interactively. If you have an accurate list of positions for the reseau marks on your image, you can remove them automatically by feeding a cursor file to imedit. Lists of reseau positions are provided on STEIS; however, because the reseau marks move over time, the positions in these lists will not accurately match the positions of the reseau marks in your image. You can still use the lists as input to imedit, but you will need to turn on the search option of imedit. Figure 2.14 shows an example of a cursor file used to remove four reseau marks. Here, we have enabled the search option by specifying search = -5; this will search an area around the specified pixel location and find the minimum pixel in the region. Note that this method will only work if the background level in your FOC image is high enough that imedit will recognize the reseau marks as the local minimum.

Figure 2.14: Sample imedit Cursor File for Removing FOC Reseau Marks
------------------------------------------------------------------------------
# Set parameters that are different from the defaults.
:aperture square
:search -5.
:radius 3.
:buffer 5.
:width 5.
:value 0.
:sigma INDEF
:xorder 2
:yorder 2
# These are the X and Y pixel coordinates of four reseau marks
# to be removed. The "b" at the end of the line is the cursor key
# that one would use in interactive mode.
248 310 1 b
313 317 1 b
243 369 1 b
308 376 1 b
------------------------------------------------------------------------------

If you have a flatfield image taken before or after your science image, you can find the reseau marks in the flats using the resfind task (in the hst_calib.foc.focgeom package).
This task works non-interactively to produce a reseau table (an STSDAS binary table with the pixel locations of the reseau marks) as output. To use this table as input to imedit, you will first need to convert it to an ASCII table. To do this, run rprintx on the reseau table, setting plain=yes and directing the output to a file, for example:
fo> rprintx res.table plain=yes > reseau.cur
After running rprintx, edit the file to append " 1 b" to each line. That file can then be passed as the cursor file to imedit. When running imedit on a file created in this way, turn off the search option (set search = 0), since the reseau positions determined from the flats should accurately match those in your science images.

Plotting and Manipulating Image Data
This section describes basic tools for plotting and working with image data: implot, imexamine, and contour.

Using implot
The IRAF implot task (in the plot package) lets you interactively examine an image by plotting data along a given line (x axis) or column (y axis). When you run the task, a large number of commands are available, in addition to the usual cursor mode commands common to most plotting tasks; a complete listing is given in the on-line help, but the most common are listed in Table 2.10. Figure 2.15 shows an example of how to use the implot task.

Table 2.10: Basic implot Commands
Keystroke  Command
------------------------------------------------------------------------------
?          Display on-line help
L          Plot a line
C          Plot a column
Q          Quit implot
Space      Display coordinates and pixel values
------------------------------------------------------------------------------

Figure 2.15: Plotting Image Data with implot

Using imexamine
The IRAF imexamine task (in the images.tv package) is a powerful task that integrates image display with various types of plotting capability and provides the ability to do simple photometry measurements.
Commands can be passed to the task using the image display cursor and the graphics cursor. A complete description of the task and its usage is provided in the online help, available from within the IRAF environment by typing help imexamine.

Contour Plots
Contour plots of image data (WF/PC-1, WFPC2, or FOC) can be created using the contour task in the IRAF plot package, or the newcont task in the STSDAS graphics.stplot package. Figure 2.16 is an example of how to use the task with a WF/PC-1 image. The newcont task provides much more flexibility in specifying the contour levels to be plotted, computes contours using a more advanced algorithm, and allows the perimeter to be labeled in world coordinates.

Displaying HST Spectra
This section explains how to generate plots of FOS or GHRS spectra for a quick first look. For FOS and GHRS data, the final calibrated data comprise two files: the .c1h file, which contains the calibrated flux value for each pixel, and the .c0h file, which contains the calibrated wavelength value for each pixel. Before we delve into the individual plotting tasks, we will begin with some general information about producing hardcopy plots and PostScript output. There are three basic plotting tasks that can be used to produce a quick-look plot of FOS or GHRS spectra:
* The fwplot task produces a flux vs. wavelength plot of your spectrum.
* The grspec task plots all (or multiple) groups from a given dataset on a single plot.
* The splot task (in the onedspec package) can be used to plot and analyze simple one-dimensional spectra.
Each of these tasks is described in more detail in subsequent sections.

Producing Hardcopy
This section describes how to get hardcopy from most IRAF tasks, how to use the Interactive Graphics Interpreter (IGI) in conjunction with plots, and how to produce PostScript plots.

Hardcopy Plots from fwplot and Most IRAF Tasks
To print a copy of the displayed plot:
1.
Type =gcur in the command window (where your CL prompt is located).
2. Move the cursor to any location in the graphics window.
3. Press = to write the plot to the graphics buffer.
4. Type q to exit graphics mode.
5. At the cl prompt, type gflush.
Plots will be printed on the printer defined by the environment variable stdplot (see "Setting Environment Variables" on page 22). If you want the plot saved to a PostScript file for use with other applications, you can do so using the psikern PostScript kernel, described in "PostScript Plots" on page 52.

Using igi
As your plotting needs grow more sophisticated, and especially as you prepare presentation or publication-quality plots, you should investigate the Interactive Graphics Interpreter, or igi. This task, in the STSDAS stplot package, can be used to draw axes, error bars, labels, and a variety of other features on plots. Options are available for a number of different line weights, font styles, and feature shapes, enabling you to create complex plots. Figure 2.17 shows a sample plot created in igi; however, because igi is a complete graphics environment in itself, a full description is beyond the scope of this document. You can learn more about igi by requesting a copy of the IGI Reference Manual from the Help Desk at STScI (help@stsci.edu).

Figure 2.17: Sample igi Plot

PostScript Plots
Using the PostScript kernel, psikern, you can create PostScript files of your plots that can easily be incorporated into papers, reports, or other documents created by LaTeX, FrameMaker, or other applications that understand PostScript graphics. You can also use psikern to add color to your plots and to use PostScript fonts. There are two commonly used ways to create a PostScript file of your plot (additional features are explained in the on-line help):
1. With the plot displayed, type : .snap psi_port for a portrait-orientation plot (or psi_land for landscape orientation).
2.
Before running your plot task, set the device parameter to psi_port or psi_land. The system will automatically name the file and usually store it on a temporary disk; you should then move the file to an appropriate directory where it won't be deleted.

Using fwplot
GHRS and FOS observers can quickly produce a flux vs. wavelength plot of a spectrum using the fwplot task in the STSDAS hst_calib.ctools package. The task takes a single argument, the name of the calibrated flux (.c1h) file. To use the fwplot task:
1. Load the hst_calib package.
2. Type the fwplot command, giving the filename and any group number that you want plotted. For example:
hs> fwplot y0mw070dt.c1h[6]

Figure 2.18: Simple Flux Versus Wavelength Plot of Spectral Data

Remember that for FOS ACCUM mode data, the last group contains the spectrum for the full exposure, while for GHRS ACCUM mode data, the groups are separate subintegrations that need to be added together to produce the final spectrum (see "Addition With Wavelength Alignment" on page 57 and "FP-SPLITs" on page 375).

Error Bars in fwplot
The fwplot task can plot error bars from the statistical error file along with the data. To plot error bars, set the parameter plterr=yes. For example:
ct> fwplot y0gq0106t.c1h[6] plterr=yes

grspec
The grspec task can be used to plot many groups from a single image on a single plot. The task takes as input the name of the image and a list of group numbers, or a range of group numbers, to plot. For example, to plot groups 2 and 5, you could use a command such as:
st> grspec y0mw070dt2.c1h 2, 5
Note that the group numbers are passed as a separate parameter, not with the standard group syntax. You can also plot a range by using a dash instead of a comma between the group numbers. Figure 2.19 shows an example of the grspec task being used to plot groups 2 through 5 of a GHRS spectrum.
st> grspec z0d80106t.c1h 2-5
Figure 2.19: Using grspec

splot
The splot task (in the onedspec package of IRAF) plots and fits one-dimensional spectral data. You can use the task directly on your calibrated HST spectra in two ways:
* Plot flux versus pixel by using the .c1h file.
* Plot wavelength versus pixel by using the .c0h file.
You will probably find splot most useful after combining the flux and wavelength information from the .c0h and .c1h files into a single file. How to do that (using the mkmultispec task), and how to use splot to analyze your HST data, is described in detail in the next section.

Analyzing HST Spectra
This section describes some STSDAS tasks that can be used for analyzing and manipulating spectral data. Included here are descriptions of how to:
* Coadd spectra.
* Combine wavelength and flux information.
* Perform other general spectral analysis functions with STSDAS.

Coadding Spectra
There are several reasons why you may wish to coadd HST spectra, for example:
* To produce a summed spectrum from the groups of an FOS or GHRS RAPID-mode observation.
* To produce a final calibrated spectrum from GHRS observations taken in FP-SPLIT mode.
* To combine spectra from separate exposures (i.e., to coadd datasets).
In this chapter we describe two different ways to do this in STSDAS. The first method is to simply coadd the data, pixel by pixel. This is appropriate whenever there is no shift in wavelength or pixel space across the spectra you are coadding (e.g., for FOS rapid mode spectra or some GHRS rapid mode spectra). The second is to either use the wavelength information in the .c0h file or cross-correlate spectral features in the spectra themselves (.c1h file) to align the data in wavelength space prior to coaddition.
This method is appropriate for GHRS ACCUM mode spectra taken with FP-SPLIT, or when coadding individual datasets taken with the grating wheel repositioned between the two observations (causing a potential shift in wavelength space). Because there is no onboard Doppler compensation applied to GHRS rapid mode observations, for high-resolution GHRS rapid mode spectra you may wish to use the poffsets method to remove Doppler shifts due to spacecraft motion prior to coadding (see page 58).

rcombine
The rcombine task (in the STSDAS hst_calib.ctools package) sums or averages, pixel by pixel, the groups of a single image. It is most useful for combining the groups of a GHRS or FOS rapid-readout image into a single spectrum with the full integration time. In addition to summing or averaging the groups of a group format image, rcombine will also (if requested) produce a data quality file and a statistical error file for the output image. The output data quality file is produced by propagating the maximum of the input data quality values from the input groups, and the statistical error file is produced by propagating the statistical errors from the input groups in quadrature. Figure 2.20 shows an example of using the rcombine task: all of the groups of the original FOS rapid mode spectrum are averaged together to produce the final spectrum, and output data quality and error files are produced. rcombine does not produce a wavelength file; you can use the wavelength information from the original .c0h file for FOS or GHRS spectra.

Figure 2.20: Combining all Groups in FOS Image Using rcombine

Addition With Wavelength Alignment
The tasks poffsets and specalign can be used together to align and add spectra that are shifted in wavelength or in pixel space with respect to one another.
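Before turning to poffsets and specalign, the pixel-by-pixel combination that rcombine performs, including the quadrature error propagation and maximum data quality propagation described above, can be sketched as follows (an illustrative approximation, not the actual rcombine code):

```python
import numpy as np

def combine_groups(fluxes, errors, dq, average=True):
    """Combine the groups of a spectrum pixel by pixel: sum or average
    the fluxes, propagate the statistical errors in quadrature, and
    carry forward the maximum data quality value."""
    fluxes = np.asarray(fluxes, dtype=float)
    errors = np.asarray(errors, dtype=float)
    n = len(fluxes)
    flux_out = fluxes.sum(axis=0)
    err_out = np.sqrt((errors ** 2).sum(axis=0))   # quadrature sum
    if average:
        flux_out /= n
        err_out /= n
    dq_out = np.max(dq, axis=0)                    # worst data quality wins
    return flux_out, err_out, dq_out
```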
The poffsets task determines the shifts between the input spectra (which can be either the groups of a single image or several different images) and writes the shifts to an output table. The shifts can be determined either (1) from the wavelength information in the .c0h input files or (2) by cross-correlating features in the input spectra. The specalign task then combines the spectra using the shift information provided by poffsets. For GHRS FP-SPLIT data, in which each group is a convolution of the spectrum with the photocathode response function, specalign can also derive the photocathode response function (the constant background on which the spectral features are shifted) as it coadds the spectra. These two tasks are most commonly used to combine the groups of an FP-SPLIT GHRS observation; however, they are also useful for combining any spectra that may be shifted in wavelength or pixel space relative to one another.

Using poffsets
The poffsets task determines the shift between the input spectra and writes the shifts to an output table. In its default mode, the task determines the shifts by cross-correlating features in the input spectra. However, if the signal-to-noise ratio of the input spectra is too low, poffsets will be unable to produce a good solution. In that case you should align the spectra using the wavelength information in the .c0h files; this limits the accuracy of the alignment to the accuracy of the zero point of the wavelength calibration for the input images. To have poffsets determine the shifts from the wavelength information in the .c0h file (i.e., not by correlating features in the individual input spectra), set the parameter usecorr=no before running the task.

Using specalign
The specalign task uses the output shift table produced by poffsets to shift and coadd the input spectra. In its default mode specalign produces:
* An output file containing the combined flux spectrum.
* An output file containing the wavelength solution for the combined spectrum.
* A file containing the constant background relative to which the input spectra were shifted (i.e., for GHRS FP-SPLIT mode data, the photocathode response function).
If you wish to insert the wavelength calibration information into the header of the combined flux spectrum using World Coordinate System (WCS) keywords, rather than producing a separate file containing the wavelength data, set the parameter wavelength to "WCS". If you do not wish to produce a constant background file (for example, if you are working with low signal-to-noise GHRS FP-SPLIT spectra, or in other cases where it is inappropriate), set the parameter niter=0 in specalign.

Example of poffsets and specalign
Below we provide an example of using poffsets and specalign (Figure 2.21, with input and output spectra shown in Figure 2.22). In this example, we combine the groups of an FP-SPLIT GHRS observation into a single spectrum by cross-correlating features in the input spectra. The inputs to poffsets are the input spectrum and the name of the output table into which the shift information is written. We then use specalign in its default mode to produce a combined spectrum and a photocathode response function. The inputs to specalign are the name of the shift table produced by poffsets, the name of the output flux spectrum, the name of the output wavelength spectrum, and the name of the output granularity response function. Additional information is available in the online help for the poffsets and specalign tasks.

The granularity ideally represents the response of the GHRS detector photocathode. Because it is a response function, the granularity should always be between zero and one. With low S/N data, however, the granularity solution may have values greater than 1.5 or 2, indicating that there was not enough signal to compute a proper granularity.
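The align-and-add procedure carried out by poffsets and specalign can be illustrated with a simplified sketch (integer pixel shifts found by cross-correlation; the real tasks are considerably more sophisticated and, for FP-SPLIT data, also solve for the granularity):

```python
import numpy as np

def find_shift(reference, spectrum, maxshift=20):
    """Integer pixel shift of `spectrum` relative to `reference`,
    found by cross-correlating mean-subtracted spectra."""
    a = reference - reference.mean()
    b = spectrum - spectrum.mean()
    shifts = np.arange(-maxshift, maxshift + 1)
    cc = [np.sum(a * np.roll(b, s)) for s in shifts]
    return shifts[int(np.argmax(cc))]         # lag of maximum correlation

def align_and_add(spectra, maxshift=20):
    """Shift each spectrum onto the grid of the first and average."""
    ref = spectra[0]
    shifted = [np.roll(s, find_shift(ref, s, maxshift)) for s in spectra]
    return np.mean(shifted, axis=0)
```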
Figure 2.21: Using poffsets and specalign
Figure 2.22: Input and Output Spectra and Granularity

Combining Wavelength and Flux Information
Calibrated spectra are composed of separate wavelength (.c0h) and flux (.c1h) files. Before you can use most IRAF and STSDAS tasks (or tasks in other reduction packages) to analyze your data, you need to put wavelength or dispersion information into the header of the flux file. There are several ways to do this; the most appropriate method depends on what type of analysis you wish to do with your data and which tasks you'll be using, because not all tasks accept the same types of wavelength information. The methods described here are:
* Using mkmultispec: This is the best choice for most users because it preserves all flux information and is supported by many IRAF tasks and other packages.
* Using STSDAS tables: This is the best choice for users who wish to work with analysis tools such as the STSDAS fitting package.
* Using the resample task: This method loses information, but is simple and provides a format that is easily exported to other software.

Mkmultispec
The most elegant method of combining wavelength and flux information, and one that has no effect on the flux data at all, is to use the mkmultispec task to put IRAF multispec-format World Coordinate System (WCS) information into the headers of your flux files (a detailed discussion of the simple linear and more complex multispec coordinate systems is available by typing help onedspec.package while in IRAF). The multispec coordinate system is intended for spectra with nonlinear dispersions or images containing multiple spectra; the format is recognized by all tasks in IRAF V2.10 or later. mkmultispec can put wavelength information into the flux header files in two different ways.
The first involves reading the wavelength data from the .c0h file, fitting the wavelength array with Legendre, Chebyshev, or spline functions, and storing the derived function coefficients in the flux header file (.c1h) in multispec format. A Legendre, Chebyshev, or cubic spline (spline3) fitting function of order 4 or larger will produce essentially identical results, all having rms residuals less than 1.0E-4 Angstrom, much smaller than the uncertainty of the original wavelength information. Because the fits are so accurate, it is usually not necessary to run the task in interactive mode to examine them. Because mkmultispec can fit only simple polynomial-type functions to wavelength data, this method will not work well with FOS prism data, whose dispersion solution has a different functional form. For prism spectra, use the header table mode of mkmultispec (see below) or the STSDAS tables method.

The other way mkmultispec can incorporate wavelength information into a flux file is to read the wavelength data from the .c0h file and place the entire data array directly into the header of the flux (.c1h) file. This method, selected by setting the parameter function=table, simply dumps the wavelength value associated with each pixel in the spectrum into the flux header. To minimize header size, set the format parameter to a suitable value; for example, format = 8.7g will retain the original 7 digits of precision of the wavelength values while not consuming too much space in the flux header file.

Tables
Another way to associate wavelength information with your flux spectra is to create an STSDAS table from your spectra; the table will contain columns of wavelength, flux, and (optionally) error values. This can be done using the imtab (image to table) task in the STSDAS ttools package.
This method is necessary if you plan to use certain tasks, such as those in the STSDAS fitting package, that do not (yet) recognize the multispec-format WCS header information. Figure 2.23 shows how you can easily create a table containing wavelength, flux, and error data from a calibrated FOS dataset.

Figure 2.23: Combining Flux and Wavelength Using Tables
cl> imtab y0cy0108t.c0h[8] y0cy0108t.tab wavelength
cl> imtab y0cy0108t.c1h[8] y0cy0108t.tab flux
cl> imtab y0cy0108t.c2h[8] y0cy0108t.tab error

This example is for an ACCUM mode observation whose last data group is number 8; it produces a table containing three columns, labeled "wavelength", "flux", and "error". Storing spectral information in tables is also the best option if you want to join two or more spectra taken with different gratings into a single spectrum covering the complete wavelength range of the individual spectra. Because the data are stored as individual wavelength-flux pairs, there is no need to first resample (and therefore degrade) the individual spectra to a common linear dispersion scale before joining them. For example, you could first create separate tables for the spectra from the different gratings, and then append the two tables using the tmerge task:
cl> tmerge n5548_h13.tab,n5548_h19.tab \
>>> n5548.tab append
Note that you will first have to edit out any region of overlapping wavelength from one or the other of the input tables, so that the output table will be monotonically increasing (or decreasing) in wavelength.

Using resample
The simplest, and therefore most widely used, way to add wavelength information is the resample task, which resamples your flux data onto a user-selected linear wavelength scale and inserts values for the image header keywords CRVAL1 (starting wavelength) and CD1_1 (wavelength increment per pixel).
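Such a resampling can be sketched as follows (an illustrative approximation using simple linear interpolation; resample itself offers more control, and the dictionary here merely stands in for the header keywords the task writes):

```python
import numpy as np

def resample_linear(wave, flux, crval1, cd1_1, npix):
    """Resample a spectrum onto a linear wavelength scale defined by a
    starting wavelength (CRVAL1) and a per-pixel increment (CD1_1)."""
    new_wave = crval1 + cd1_1 * np.arange(npix)   # linear wavelength grid
    new_flux = np.interp(new_wave, wave, flux)    # interpolate the fluxes
    header = {"CRVAL1": crval1, "CD1_1": cd1_1}   # keywords resample writes
    return new_wave, new_flux, header
```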
Use of these particular keywords provides some degree of portability between systems because they are standard FITS keywords. This method has the most severe side effects: not only do you lose information from your flux spectrum by resampling, but you also lose all further use of the accompanying error (.c2h) and data quality (.cqh) files, which are not, and cannot be, resampled.

Spectral Analysis and Manipulation
There are many IRAF and STSDAS tasks available to plot, manipulate, and fit features in your spectra. Tables 2.11 through 2.13 list the most relevant tasks, the package in which each is found, the type of input it expects (e.g., flux spectra with wavelength information in WCS), and whether it will work with more than one group of a multigroup image and use the statistical error files in its analysis. In the next section we describe three particular tasks or packages that you may wish to use to analyze your data:
* splot in the noao.onedspec package.
* The STSDAS fitting package.
* specfit in the STSDAS contrib package.

Table 2.11: Tasks for Fitting Spectra
Task       Package                  Purpose
------------------------------------------------------------------------------
fitprofs   noao.onedspec            Non-interactive Gaussian profile fitting to features in spectra and image lines
nfit1d     stsdas.analysis.fitting  Fit 1-D nonlinear functions to spectra; uses wavelength & error for list/tbl.
ngaussfit  stsdas.analysis.fitting  Fit multiple 1-D Gaussians to spectra; uses wavelength & error for list/tbl.
sfit       noao.onedspec            Fit spectra with polynomial function (WCS header wavelength)
specfit    stsdas.contrib           Fit multiple line profiles and continua to spectra
splot      noao.onedspec            Fit multiple 1-D Gaussians and continua to spectra
------------------------------------------------------------------------------

Table 2.12: Tasks for Plotting Spectra
Task       Package                  Purpose
------------------------------------------------------------------------------
bplot      noao.onedspec            Plot spectra non-interactively; reads WCS header for wave scale
fwplot     stsdas.hst_calib.ctools  Plot flux vs. wavelength for a single group of an FOS or GHRS image; will plot errors from statistical error file
grplot     stsdas.graphics.stplot   Plot arbitrary lines from 1-D image; overplots multiple groups; no error or wavelength information is used
grspec     stsdas.graphics.stplot   Plot arbitrary lines from 1-D image; stacks groups and reads WCS header for wave scale
modeone    stsdas.hst_calib.fos     Restore and display FOS "Mode I" image
rapidlook  stsdas.hst_calib.ctools  Create and display a 2-D image of stacked 1-D images
sgraph     stsdas.graphics.stplot   Plot spectra and image lines; allows overplotting of error bars and access to wavelength array if list or table input is used
specplot   noao.onedspec            Stack and plot multiple spectra; reads wavelength in WCS header
------------------------------------------------------------------------------

Table 2.13: Utility Tasks for Working with Spectra
Task       Package                  Purpose
------------------------------------------------------------------------------
boxcar     images                   Boxcar smooth a list of images; processes one group at a time
continuum  noao.onedspec            Continuum normalize spectra; processes one group at a time
gcopy      stsdas.toolbox.imgtools  Copy multigroup images
gimpcor    stsdas.hst_calib.fos     Calculate and display FOS GIMP values
grlist     stsdas.graphics.stplot   List file names, including groups, for all groups in an image; used to make lists for tasks that do not use group syntax
magnify    images                   Interpolate spectrum on finer (or coarser) pixel scale
poffsets   stsdas.hst_calib.ctools  Determine pixel offsets between shifted spectra
rcombine   stsdas.hst_calib.ctools  Combine (sum or average) groups in a 1-D image with option of propagating errors and data quality values
resample   stsdas.hst_calib.ctools  Resample FOS and GHRS data to a linear wavelength scale
sarith     noao.onedspec            Spectrum arithmetic
scombine   noao.onedspec            Combine spectra; reads WCS header keywords for wavelength
specalign  stsdas.hst_calib.ctools  Align and combine shifted spectra (see poffsets)
unwrap     stsdas.hst_calib.fos     Remove wrap from FOS data exceeding internal counter limit
------------------------------------------------------------------------------

splot
The splot task in the IRAF noao.onedspec package is a good general analysis tool that can be used to examine, smooth, fit, and do simple arithmetic operations on spectra. Figure 2.24 shows a sample plot produced by splot. Like all IRAF tasks, splot will only work on one group at a time from a multigroup file. You can specify which group to operate on by using the square bracket notation, for example:
cl> splot y0cy0108t.c1h[8]
If you don't specify a group in brackets, the task will take the first group. To use splot to analyze your HST spectrum, you will first need to write the wavelength information from your .c0h file into the header of your .c1h file in WCS, using the mkmultispec task (see "Mkmultispec" on page 61). The splot task is complex, with many available options (described in detail in the online help within IRAF; type help splot). Table 2.14 summarizes some of the more useful capabilities of splot. When you are using splot, a log file is saved containing results produced by the equivalent width or de-blending functions. To specify a name for this log file, set the save_file parameter (see the online help).
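The equivalent-width measurement performed by splot's e keystroke can be sketched numerically as follows (illustrative only; here the continuum is taken as the straight line joining the flux at the two marked points, and the integral is done by the trapezoid rule):

```python
import numpy as np

def equivalent_width(wave, flux, w1, w2):
    """Equivalent width of a feature between two marked continuum
    points (w1, w2): integrate 1 - F/Fc over wavelength."""
    in_band = (wave >= w1) & (wave <= w2)
    f1, f2 = np.interp([w1, w2], wave, flux)
    # Linear continuum between the two marked points.
    cont = np.interp(wave[in_band], [w1, w2], [f1, f2])
    depth = 1.0 - flux[in_band] / cont
    x = wave[in_band]
    # Trapezoid-rule integration over the marked region.
    return np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(x))
```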
Figure 2.24: Using splot to Fit a Gaussian to an Absorption Line

Table 2.14: Capabilities of the splot Task
Keystroke Command   Purpose
------------------------------------------------------------------------------
Manipulating spectra
f   Arithmetic mode; add and subtract spectra
l   Convert spectrum from F(nu) to F(lambda)
n   Convert spectrum from F(lambda) to F(nu)
s   Smooth with a boxcar
u   Define linear wavelength scale using two cursor markings
Fitting spectra
d   Mark two continuum points & de-blend multiple Gaussian line profiles
e   Measure equivalent width by marking points around target line
h   Measure equivalent width assuming Gaussian profile
k   Mark two continuum points and fit a single Gaussian line profile
m   Compute the mean, rms, and S/N over marked region
t   Enter interactive curve fit function (usually used for continuum fitting)
Displaying and redrawing spectra
a   Expand and autoscale data range between cursor positions
b   Set plot base level to zero
c   Clear all windowing and redraw full current spectrum
r   Redraw spectrum with current windowing
w   Window the graph
x   Etch-a-sketch mode; connects two cursor positions
y   Overplot standard star values from calibration file
z   Zoom graph by a factor of two in the x direction
$   Switch between physical pixel coordinates and world coordinates
General file manipulation commands
?   Display help
g   Get another spectrum
i   Write current spectrum to new or existing image
q   Quit and go on to next input spectrum
------------------------------------------------------------------------------

STSDAS fitting Package
The STSDAS fitting package contains several powerful and flexible tasks for fitting and analyzing spectra. The ngaussfit and nfit1d tasks, in particular, are very good for interactively fitting multiple Gaussians and nonlinear functions, respectively, to spectral data. Unfortunately, these tasks do not yet recognize the "multispec" WCS method of storing dispersion information.
They do recognize the simple sets of dispersion keywords, such as W0 and WPC or CRPIX, CRVAL, and CDELT, but these forms apply only to linear coordinate systems; your data would therefore first have to be resampled onto a linear wavelength scale. Fortunately, these tasks will also accept input from STSDAS tables, in which you can store (wavelength, flux) value pairs or (wavelength, flux, error) triples.

When using the ngaussfit and nfit1d tasks, you must specify initial guesses for the function coefficients before a fit can be computed. The initial guesses can be specified either via parameter settings in the task's psets or interactively once the task has started. For example, let's say you want to fit several features using the ngaussfit task. Using the default parameter settings, you can start the task with the following command line:

fi> ngaussfit n4449.hhh linefits.tab

This reads spectral data from the image n4449.hhh and stores the results of the line fits in the STSDAS table linefits.tab. At this point, a plot of your spectrum should appear in a plot window and the task will be left in cursor input mode. You can use the standard IRAF cursor mode commands to re-window the plot, restricting the field of view to the region around the particular feature or features that you want to fit. At this point it is also good to:

* Define a sample region (using the cursor mode S command) over which the fit will be computed, so that fitting time will not be wasted trying to fit the entire spectrum.
* Define an initial guess for the baseline coefficients by placing the cursor at two baseline locations (one on either side of the feature to be fitted) and using the B keystroke.
* Use the R keystroke to redraw the screen and see the baseline that you've just defined.
* Set the initial guesses for the Gaussian centers and heights by placing the cursor at the peak of each feature and typing P.
* Once you've marked all the features that you want to fit, press F to compute the fit. The results will be displayed automatically. You can use the :show command to see the coefficient values.

Note that when the ngaussfit task is used in this way (i.e., starting with all default values), the initial guess for the FWHM of the features will be set to a value of 1. Furthermore, this coefficient, as well as the coefficients defining the baseline, are by default held fixed during the computation of the fit, unless you explicitly tell the task through cursor colon commands to allow these coefficients to vary. It is in fact sometimes best to leave these coefficients fixed during an initial fit, and then allow them to vary during a second iteration. This rule of thumb also applies to the errors parameter, which controls whether or not the task estimates error values for the derived coefficients. Because error estimation is very CPU-intensive, it is most efficient to leave it turned off until you've got a good fit, and then turn it on for one last iteration. See the online help for details and a complete listing of cursor mode colon commands: type help cursor.

Figure 2.25 shows the results of fitting the Hbeta (4861 A) and [OIII] (4959 and 5007 A) emission features in the spectrum of NGC 4449. The resulting coefficients and error estimates (in parentheses) are shown in Figure 2.26.

Figure 2.25: Fitting Hbeta and [OIII] Emission Features in NGC 4449

Figure 2.26: Coefficients and Error Estimates

function = Gaussians
coeff1  = 8.838438E-14 (0.)            - Baseline zeropoint (fix)
coeff2  = -1.435682E-17 (0.)           - Baseline slope (fix)
coeff3  = 1.854658E-14 (2.513048E-16)  - Feature 1: amplitude (var)
coeff4  = 4866.511 (0.03789007)        - Feature 1: center (var)
coeff5  = 5.725897 (0.0905327)         - Feature 1: FWHM (var)
coeff6  = 1.516265E-14 (2.740680E-16)  - Feature 2: amplitude (var)
coeff7  = 4963.262 (0.06048062)        - Feature 2: center (var)
coeff8  = 6.448922 (0.116878)          - Feature 2: FWHM (var)
coeff9  = 4.350271E-14 (2.903318E-16)  - Feature 3: amplitude (var)
coeff10 = 5011.731 (0.01856957)        - Feature 3: center (var)
coeff11 = 6.415922 (0.03769293)        - Feature 3: FWHM (var)
rms = 5.837914E-16
grow = 0.
naverage = 1
low_reject = 0.
high_reject = 0.
niterate = 1
sample = 4800.132:5061.308

specfit

The specfit task, in the STSDAS contrib package, is another powerful interactive facility for fitting a wide variety of emission line, absorption line, and continuum models to a spectrum. This task was written by Gerard Kriss at Johns Hopkins University; because it is a contributed task, little or no support is provided by the STSDAS group. There is extensive online help available, however, which should usually be sufficient to guide you through the task. Additional information is available in Astronomical Data Analysis Software and Systems III, ASP Conference Series, Vol. 61, p. 437, 1994.

The input spectrum to specfit can be either an IRAF image file or an ASCII file with a simple three-column (wavelength, flux, and error) format. If the input file is an IRAF image, the wavelength scale is set using the values of W0 and WPC or CRVAL1 and CDELT1. Hence, for image input, the spectral data must be on a linear wavelength scale. To retain data on a non-linear wavelength scale, you must use the ASCII file mode of input, in which the wavelength value associated with each data value is specified explicitly. The help file explains a few pieces of additional information that must be included as header lines in an input text file.
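The data portion of such an ASCII input file is just the three columns, one (wavelength, flux, error) triple per line. A Python sketch of writing one (the file name and data values here are invented, and the additional header lines that specfit requires, described in its help file, are not reproduced):

```python
# Write one wavelength, flux, error triple per line, as the data portion
# of a specfit ASCII input file. The required header lines described in
# the specfit online help must be added separately and are omitted here.
rows = [
    (4860.0, 1.2e-14, 3.0e-16),   # invented sample values
    (4861.0, 1.8e-14, 3.1e-16),
    (4862.0, 1.3e-14, 3.0e-16),
]
with open("spectrum.txt", "w") as out:
    for wave, flux, err in rows:
        out.write("%10.3f %12.4e %12.4e\n" % (wave, flux, err))
```

Because each wavelength is written explicitly, the grid need not be linear; this is what makes the ASCII mode suitable for unresampled FOS and GHRS data.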
By selecting a combination of functional forms for the various components, you can fit complex spectra with multiple continuum components, blended emission and absorption lines, absorption edges, and extinction. Available functional forms include linear, power-law, broken power-law, blackbody, and optically thin recombination continua; various forms of Gaussian emission and absorption line and absorption edge models; Lorentzian line profiles; damped absorption line profiles; and mean galactic extinction.

HSP Specific Tasks

There are a few tasks in STSDAS that may be useful for conditioning and analyzing data from the High Speed Photometer. Table 2.15 summarizes these tasks.

Table 2.15: HSP Tasks in STSDAS

Task         Function
------------------------------------------------------------------------------
parthitv     Remove particle events
polyepoch    Fit polynomials to coefficients as functions of the epoch
polyfit      Fit a polynomial to a specified quantity
posvel       Calculate position and velocity vectors of the spacecraft
twodpolyfit  Two-dimensional polynomial fit of temperature and epoch
------------------------------------------------------------------------------

Getting IRAF and STSDAS

Both IRAF and STSDAS are provided free of charge to the astronomical community. You must have IRAF to run STSDAS. Detailed information about retrieving and installing STSDAS is found in the STSDAS Site Manager's Installation Guide and Reference. If you have any problems getting and installing STSDAS, TABLES, or any other packages or data described in this handbook, please contact the Help Desk by sending e-mail to: help@stsci.edu. A description of how to install the synphot data files is provided on page 71.

Retrieving the IRAF and STSDAS Software

If you already have IRAF and STSDAS installed on your system, you should skip this section and look at "Setting Up IRAF" on page 14. There are three ways to get the software:

* Use the World Wide Web.
* Use anonymous FTP.
* Request a tape.
World Wide Web

When you access the STSDAS World Wide Web page, you will see an option for getting the software. Links and instructions are provided to download the appropriate files to your local system or to display the software directory, from which you can select the series of smaller files. The STSDAS web page is available at the following URL:

http://ra.stsci.edu/STSDAS.html

Anonymous FTP

* IRAF: iraf.noao.edu (140.252.1.1)
* STSDAS: ftp.stsci.edu (130.167.1.2)

There are two points to remember when using FTP to retrieve STSDAS:

* You must retrieve and install the TABLES package before STSDAS.
* You should retrieve the README file from the directory /software/stsdas/v1.3 and read it to find out which files you should retrieve.

You must have IRAF installed on your system before installing TABLES and STSDAS, and TABLES must be installed before STSDAS. Instructions for installing STSDAS are available in the doc subdirectory of the directory where you find STSDAS. The complete instructions for installing STSDAS, TABLES, and all of the supporting software and reference files (including instrument reference files and the synphot dataset) are found in the STSDAS Site Manager's Installation Guide and Reference.

Requesting Tapes

You can ask to have the software shipped to you on magnetic tape in a variety of formats. To do so, contact the Help Desk by sending e-mail to help@stsci.edu. The Help Desk staff will send you an ASCII text version of the STSDAS Software Request Form, which you can then complete and return via e-mail. The software can also be registered and requested using on-line forms available through the World Wide Web at the following URL:

http://ra.stsci.edu/RegistForm.html

When you request the STSDAS software, you can also ask for the appropriate version of IRAF, which will be requested for you: simply check the appropriate box on the form under "Do You Already Have IRAF Installed?"
If you prefer to request the IRAF software independently of STSDAS, you can do so by sending e-mail to: iraf@iraf.noao.edu

Synphot Dataset

This manual sometimes refers to the synphot dataset, which must be available in order to run tasks in the STSDAS synphot package. These data files are not included with the STSDAS software and must be retrieved independently. To do this, retrieve the series of compressed tar files from the STEIS directory software/stsdas/refdata/synphot. After uncompressing and extracting the tar files (as described in the previous sections), you need to unpack the FITS files as described below.

The synthetic photometry data are read in the same way as the instrument datasets, using the script unpack.cl provided in the top directory. This script is run within IRAF to convert the data from FITS format into the format used by the synphot tasks. The script assumes you have the logical crrefer set up in your extern.pkg file (which is in the directory $iraf/unix/hlib (Unix) or $iraf/vms/hlib (VMS)) or have it set up in your session. You do this by placing the command below in extern.pkg or by typing it on the command line:

set crrefer = "/node/partition/stdata/synphot/"

Figure 2.27 shows how to convert the files.

Figure 2.27: Unpacking Synthetic Photometry Files

> cl
cl> cd /node/partition/stdata/synphot
cl> set crrefer = "/node/partition/stdata/synphot/"
cl> task $unpack = unpack.cl
cl> tables
ta> fitsio
fi> unpack

Note that all three synphot tar files must have been retrieved and extracted for the script to complete successfully.

References

Available from STScI

* STSDAS Users Guide, version 1.3.3, September 1994.
* STSDAS Installation and Site Managers Guide, version 1.3.3, March 1995.
* Synphot Users Guide, version 1.3.3, March 1995.
* IGI Reference Manual, version 1.3, October 1992.

Available from NOAO

* A Beginner's Guide to Using IRAF, 1994, J. Barnes.
* User Manual for SAOimage, 1991, M. VanHilst.
* Photometry Using IRAF, 1994, L. Wells.
* A User's Guide to Stellar CCD Photometry with IRAF, 1992, P. Massey and L. Davis.

Other References Cited in This Chapter

* Horne, K., 1988, in New Directions in Spectrophotometry, A.G.D. Philip, D.S. Hayes, and S.J. Adelman, eds., L. Davis Press, Schenectady NY, p. 145.
* Koornneef, J., R. Bohlin, R. Buser, K. Horne, and D. Turnshek, 1986, in Highlights of Astronomy, Vol. 7, J.-P. Swings, ed., Reidel, Dordrecht, p. 833.
* Kriss, G., 1994, in Astronomical Data Analysis Software and Systems III, ASP Conference Series, Vol. 61, p. 437.

------------------------------------------------------------------------------

CHAPTER 3: Data Tapes and File Structures

In This Chapter...
Tape Log and Contents
Printouts of Files
Reading HST Data Tapes
Data Files

Within a few weeks of your HST observations, STScI will mail you one or more tapes containing your data. If you have just received your shipment of HST observation data, you will find that your package includes:

* At least one data tape, together with a tape log listing its contents.
* Several printouts.

This chapter explains how to use the tape log to understand what kinds of files are on the tape, what the hardcopy printouts represent, and how to actually read a tape. A brief outline of the file structure is given as well. If you are an archival user, you should also refer to Chapter 4 for information about data retrieval and Archive structure.

The printouts that you receive may be substantially different from those described here. Institute staff are redesigning the paper products to make them more complete and self-explanatory. This chapter describes the paper products as they were in December 1995.

Tape Log and Contents

With your tape you receive a tape log listing the contents of your tape. A sample tape log is reproduced in Figure 3.1.
There are at least four types of files on any tape; these files are described in more detail in the following sections:

* An ASCII trailer file (page 78), written as a FITS-format ASCII table.
* A series of FITS-format data files which comprise your HST data (page 78).
* A post-observation data quality (PDQ) file (page 79), written as a FITS-format ASCII table.
* An OMS jitter image file and jitter table.

Tapes containing data from Faint Object Spectrograph (FOS) and Goddard High Resolution Spectrograph (GHRS) observations will include additional FITS-format tables and images containing the reference files used by the Routine Science Data Processing (RSDP) pipeline to calibrate the data.

Tape Log

The tape log should list, for each observation, a series of data files including the trailer file (identified by the extension trl). All files that have the same rootname (e.g., u2ri3101t) belong to a single observation. Jitter image files have the same first 8 characters, but the ninth character is a "j". Although this example shows a WF/PC-1 tape log, it illustrates the relevant features of tape logs provided with any HST dataset. The first letter of the rootname of the dataset indicates the instrument with which the data were observed:

* FOC data begin with "x".
* FOS data begin with "y".
* FGS data begin with "f".
* GHRS data begin with "z".
* HSP data begin with "v".
* WF/PC-1 data begin with "w".
* WFPC2 data begin with "u".

The example in Figure 3.1 lists the files for two Wide Field Planetary Camera 2 (WFPC2) datasets, each containing twelve files.

Figure 3.1: Sample Tape Log

Trailer Files

The trailer files are ASCII files that contain the log of the processing of your data by the Post-Observation Data Processing System (PODPS) pipeline. The trailer files can be printed after being read from tape. Note that the trailer file is formatted with 132 columns.
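The first-letter convention is simple enough to encode. A Python sketch (an illustrative helper, not an STSDAS task; the function and dictionary names are ours):

```python
# Map the first character of an HST rootname to its instrument,
# following the first-letter convention listed above.
INSTRUMENT_BY_LETTER = {
    "x": "FOC",
    "y": "FOS",
    "f": "FGS",
    "z": "GHRS",
    "v": "HSP",
    "w": "WF/PC-1",
    "u": "WFPC2",
}

def instrument_of(rootname):
    """Return the instrument for a nine-character HST rootname."""
    return INSTRUMENT_BY_LETTER.get(rootname[0].lower(), "unknown")

print(instrument_of("u2ri3101t"))   # the sample rootname above: WFPC2
```

The same lookup applies to any file in a dataset, since all files from one observation share the rootname.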
FITS-Format Data Files

Each image is written to tape as a single FITS file, which appears as a single entry in the tape log. The file name consists of the nine-character rootname uniquely identifying the observation, followed by a three-character extension assigned according to the type of data file. For FITS images, the last letter in the extension is "f", which distinguishes the FITS-format file from GEIS (Generic Edited Information Set) files, which contain a separate header and data file for each image. The image is a normal FITS-format file that is readable by any FITS reader. A description of the FITS format and the various options and parameters that can be used in the FITS standard can be found in the document "Implementation of the Flexible Image Transport System (FITS)," by the NASA/OSSA Office of Standards and Technology. The document is available via FTP to nssdca.gsfc.nasa.gov in the directory FITS. A listing of FITS standards and documentation is available via the World Wide Web from NRAO. A file name with a nine-character rootname followed by "_cvt" (for "converted") distinguishes a converted file from its GEIS counterpart; the three-character extension is again assigned according to the type of data file.

In the HST default FITS format, HST group data are stored in an extra dimension of the FITS file. HST data are written to tape in this format in order to keep the data constituting a single observation together in one file (e.g., four Wide Field (WF) or Planetary Camera (PC) charge-coupled device (CCD) frames, FOS spectra from ACCUM mode, etc.). The GCOU field on the tape log gives the number of groups associated with an image. (Group format is explained in more detail on page 86.) The HST default FITS format uses IEEE format, with all groups written to a single FITS file and the group parameters written to an ASCII table.
Type "help fitsio opt=sys" from within IRAF for details. The DIMENS and BITP (bits per pixel) fields can be used to determine the size of each file. For reference, Table 3.1 below gives the typical sizes (and range of possible sizes) of single files and complete datasets for each instrument.

Table 3.1: Average Instrument File and Dataset Sizes

Instrument     Mode                 Single File Size  Dataset Size
------------------------------------------------------------------------------
FOC            Normal (512 x 512)   1 MB              ~4 MB
               Zoom (1024 x 512)    2 MB              8-10 MB
FOS            Normal               20-80 KB          < 500 KB
               Rapid readout        > 4 MB            > 10 MB
FGS            POS and Trans        2 MB              6 MB
GHRS           Normal               20-80 KB          < 500 KB
               Rapid readout        > 4 MB            > 10 MB
HSP            Normal               ~40 KB            ~500 KB
               Occasional           > 125 MB          > 1 GB
WF/PC-1 and    Normal (full mode)   10 MB             26 MB
WFPC2          Area mode            2.5 MB            7 MB
------------------------------------------------------------------------------

Printouts of Files

In addition to the trailer files provided on tape, you will receive hardcopies of files that will help you assess the quality of your data. You will also receive a hardcopy plot of a spectrum or photographic image from each observation. Hardcopies may also be provided for:

* PDQ (PODPS Data Quality) files.
* OCX (OSS Observer Comment) files.

These files may also be retrieved from the HST Archive (see Chapter 4).

PDQ Files

The Post Observation Summary and Data Quality Comment files (PDQ files) contain predicted as well as actual observation parameters extracted from the standard header and science headers. They also contain a comment on any obvious features in the spectrum or image, as noted by the OPUS data assessor, or information about problems or oddities encountered during the observation or data processing. A sample PDQ file is reproduced in Figure 3.2. In this example, note the comments at the top and the reduced exposure time.

Figure 3.2: Sample PDQ File

OCX Files

The Observer Comment files (OCX files) are produced by the Observation Support System (OSS) personnel.
These files are not created for every observation, but, when available, they contain updated mission information obtained at the time the observation was executed, along with OSS keywords and comments. Prior to April 17, 1992, OCX files were not always archived separately and, in some cases, were prepended to the trailer file (which is on your tape). After early February 1995, OCX files were produced only when an observation was used to locate the target for an Interactive Target Acquisition. At that time, mission and spacecraft information (like that shown in Figure 3.3) was moved to the PDQ reports and the Observation Logs (OMS jitter image and jitter table). Figure 3.3 is a pre-1995 sample OCX file; note the comments at the end.

Figure 3.3: Sample OCX File

------------------------------------------------------------------------------
*****************************************************************************
* This is the data quality evaluation report from OSS operations personnel  *
*****************************************************************************

Faint Object Camera (FOC)
Script for Checking Quality/Utility of Received Science Data

Observation:   Object name:   Executed (UT):     Received (UT):
X0KE0101A      NGC4486        91.173/ 22:53:44   22-JUN-1991 23:19

FOC Specific Information
------------------------
                     Expected     Received
Science Data Mode:   image        image
Calibration Mode:    _            _
Camera Used:         F/96         f/96
Data Format:         512 x 1024   512 X 1024
PCS Mode:            FINELOK      finelock

File Information
----------------
              DOD  DOH  SHD  SHH  ULD  ULH
Expected:     Y    Y    Y    Y    Y    Y
Received:     Y    Y    Y    Y    Y    Y
Group Count:  1         1         1

Data file lost?         No
Data sent to [DUMP]?    No
In correct directory?   Yes

Data Information
----------------
Expected exposure time:   1500.000 Sec
Expected filters:         FW1: CLEAR  FW2: CLEAR  FW3: CLEAR  FW4: F372M
Expected calibration lamp status: NONE

-----------------------------------------------------------------------
| Grp | Data    | Defects/    | Int. Level | Background | S/N  | Focus |
| #   | Dropout | Anomalies * | of Target  | Int. Level | Est. |       |
-----------------------------------------------------------------------
| 1   | n       | n           | 239        | 15         | good | _     |
-----------------------------------------------------------------------
* explain in comments

Coronagraphic finger used?   0.8" n   0.4" n
Is the bright object behind the finger?   n
Distant Moving Objects:
  Field targeted?          n
  Any suspicious trails?   n

Comments: Target is M87. OSS calculated and sent these offsets:
(0.157, -1.012). Exposure was disrupted by loss of the take data flag.
Exposure started five times with a total exposure of 603 seconds instead
of the scheduled 1500. Highest intensity was actually around 275 due to
a count rollover (256 + 19 = 275).
------------------------------------------------------------------------------

Reading HST Data Tapes

In the previous section we described what you should have received with your shipment of HST data. In this section we expand on that, describing how you can use the tasks in STSDAS to actually read the tape. The data on your tape are in FITS format. In STSDAS, you read the data onto disk using the strfits task, which creates a series of GEIS files. This section describes:

* The general process of reading a tape.
* Preliminary steps, such as mounting the tape and allocating a drive.
* The STSDAS commands to read the tape.

The STSDAS strfits FITS reader preserves the multigroup format of an HST image. This format must be retained if you plan to recalibrate your data in STSDAS.

To read an HST data tape, you need to:

1. Start IRAF and load the stsdas and fitsio packages.
2. Mount the tape.
3. Set global parameters.
4. Set the strfits parameters and read the tape.

Loading Packages

Go to your IRAF home directory and start IRAF by typing:

cl

This will start an IRAF session. Software in IRAF is organized into packages. To load a package, type its name.
Once you are in IRAF, load the stsdas and fitsio packages as shown in Figure 3.4.

Figure 3.4: Loading Packages in STSDAS

The prompt (such as fi>) shows the first two letters of the most recently loaded package. The fitsio package contains tasks for handling the FITS-format files used for HST images. You can use catfits to produce a listing of the contents of your tape, and strfits to read the data onto disk. When you are done working with your data, you may choose to write it back out to tape using stwfits.

Mounting the Tape

Mount the tape on your tape drive, then allocate the device within IRAF by typing:

fi> allocate device

where device is the IRAF name of the tape drive. If you are not sure how to mount tapes or don't know the IRAF names that match your tape drives, see your local system administrator for help.

Setting Global Parameters

Set imtype to specify that the files are to be read in as GEIS format, for example:

fi> set imtype="hhh"

Go to the directory in which you want the read-in files to be stored. For example:

fi> cd /nem/data1/hstdata

Using strfits

Like most IRAF and STSDAS tasks, strfits has several parameters that control the task's behavior. To edit the parameters, use the epar task:

fi> epar strfits

In strfits you should set:

* fits_file to the IRAF name of your tape drive.
* file_list to specify the files to be read off the tape.
* xdimtogf = yes to specify that a multigroup GEIS file be created.
* oldirafname = yes to restore the original names of the files.

When you are finished editing the parameters, type :go from within epar to run strfits automatically, or type :q to return to the IRAF prompt, from which you can type strfits to run the task.

Be sure to set oldirafname and xdimtogf to "yes", or else the tape will not be read correctly and you will not be able to manipulate the data. This is vital if you plan to recalibrate your data.

You can display the current values of the strfits parameters by typing lpar strfits, as shown in Figure 3.5.
Figure 3.5: Displaying Parameter Values with lpar

fi> lpar strfits
        fits_file = "mta"            FITS data source
        file_list = ""               File list
        iraf_file = ""               IRAF filename
        (template = "")              template filename
     (long_header = no)              Print FITS header cards?
    (short_header = yes)             Print short header?
        (datatype = "default")       IRAF data type
           (blank = 0.)              Blank value
           (scale = yes)             Scale the data?
        (xdimtogf = yes)             Transform xdim FITS to multigroup?
     (oldirafname = yes)             Use old IRAF name in place of iraf_file?
          (offset = 0)               Tape file offset
            (mode = "ql")
fi>

In this example, the tape drive is "mta" (identified by the fits_file parameter) and files 1 through 999 (or end of tape, whichever occurs first) will be read from the tape and restored using their original file names (because the oldirafname parameter is set to "yes"). The parameter xdimtogf is set to "yes", meaning that a multigroup GEIS file will be written. Once your data are read in, you can deallocate your tape drive by typing:

fi> deallocate device

Data Files

Once you have read your tape and gotten a directory listing of your files, you will notice that you have many more files than you have HST exposures (observations). This section explains how to:

* Identify and understand the nature of each of the data files from an HST exposure.
* Learn about the observational parameters of each exposure by examining the header keywords.

If you used strfits to read your data tape as we described on page 83, your data will now be in GEIS format. If you do a directory listing (type dir within IRAF), you will find many files, all of which have a nine-character rootname and a three-character extension.

Header and Data Files

Files whose extensions end with the letter "h" (e.g., w01o0105t.c1h) are ASCII header files. The header files contain keywords that describe the parameters used to take the observation, the processing of the data, and the properties of the image. Files whose extensions end in the letter "d" (e.g., w01o0105t.c1d) are binary data files.
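This naming rule can be sketched as a simple filename transformation (an illustrative Python helper, not an IRAF task; the function name is ours):

```python
# For a GEIS header file whose extension ends in "h", the matching
# binary data file replaces that final "h" with "d" (e.g., .c1h -> .c1d).
def data_file_for(header_name):
    """Return the binary data file name paired with a GEIS header file."""
    if not header_name.endswith("h"):
        raise ValueError("not a GEIS header file: " + header_name)
    return header_name[:-1] + "d"

print(data_file_for("w01o0105t.c1h"))   # w01o0105t.c1d
```

Together, such an "h"/"d" pair makes up one GEIS image.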
A single GEIS image is composed of a header and data pair (e.g., the files w01o0105t.c1h and w01o0105t.c1d together represent a single image).

Rootnames and Datasets

By definition, a dataset is the collection of all files produced by the Routine Science Data Processing (RSDP) pipeline for a single HST exposure. The files in a dataset all share the same nine-character rootname, or IPPPSSOOT. As defined in Table 3.2, the rootname follows a specific naming convention that allows each executed observation to be uniquely tied to the scheduling information from which it originated.

Table 3.2: IPPPSSOOT Root File Names

Character  Meaning
I          Instrument used; will be one of:
             U - Wide Field/Planetary Camera-2
             V - High Speed Photometer
             W - Wide Field/Planetary Camera
             X - Faint Object Camera
             Y - Faint Object Spectrograph
             Z - High Resolution Spectrograph
             E - Engineering data
             F - Fine Guidance Sensors
             H-N - Reserved for future instruments
             O - Intermediate product files
             S - Engineering subset data
             T - Guide star position data
PPP        Program ID; can be any combination of letters or numbers (46,656 possible combinations). There is a unique association between program ID and proposal ID.
SS         Observation set ID; any combination of letters or numbers (1,296 possible combinations).
OO         Observation ID; any combination of letters or numbers (1,296 possible combinations).
T          Source of transmission (RSDP environment):
             R - Real time (not recorded)
             T - Tape recorded
             M - Merged real time and tape recorded
             N - Retransmitted merged real time and tape recorded
             O - Retransmitted real time
             P - Retransmitted tape recorded

File Extensions

Each file in a dataset has a three-character extension that, for each instrument, uniquely identifies the file contents. Examples of file extensions are .q0h and .q0d. File extensions ending in "h" are image header files in ASCII format, and file extensions ending in "d" are the actual images in binary format.
In this specific example, "q" indicates the data quality file for raw science data. Since the meaning of file extensions varies by instrument, their description is deferred to the individual instrument chapters.

Group Data

The data from a single HST observation are often composed of multiple images or spectra. For example, a single WF/PC-1 exposure is obtained as four images (one image for each CCD chip). Likewise, the FOS and GHRS obtain data in a time-resolved fashion, so that a single FOS or GHRS exposure is composed of many spectra, one corresponding to each readout. GEIS files use group format to keep all of the data from a given HST exposure together in a single image file. The data corresponding to each sub-image (for the WF/PC-1 or WFPC2) or each sub-integration (for the FOS or GHRS) are stored sequentially in the groups of a single GEIS image. The header file for an image contains the information that applies to the observation as a whole (i.e., to all the groups in the image); the group-specific keyword information is stored with the group data itself in the binary data file.

The number of groups associated with each observation varies with the instrument configuration, observing mode, and observing parameters. Table 3.3 lists, for the most commonly used modes of each instrument, the contents and, where unambiguous, the number of groups in the final calibrated image.

Table 3.3: Groups in Calibrated Images, by Instrument and Mode

Instrument  Mode   Number of  Description
                   Groups
------------------------------------------------------------------------------
FGS         All    17         FGS data are not reduced with IRAF and STSDAS;
                              FGS groups therefore have a different meaning
                              than for the other instruments.
FOC         All    1          All FOC images have only a single group.
FOS         ACCUM  n          Group n contains accumulated counts from groups
                              (subintegrations) 1, 2, ..., n. The last group
                              is the full exposure.
            RAPID  n          Each group is an independent subintegration with
                              exposure time given by group parameter EXPOSURE.
HSP         All    1          HSP datasets always have a single group,
                              representing digital star (.d0h, .c0h), digital
                              sky (.d1h, .c1h), analog star (.d2h, .c2h), or
                              analog sky (.d3h, .c3h).
GHRS        ACCUM  n          Each group is an independent subintegration with
                              exposure time given by group parameter EXPOSURE.
                              If FP-SPLIT mode was used, the groups will be
                              shifted in wavelength space; the independent
                              subintegrations should be coadded prior to
                              analysis (see "Coadding Spectra" on page 55).
            RAPID  n          Each group is a separate subintegration with
                              exposure time given by group parameter EXPOSURE.
WF/PC-1     WF     4          Group n represents CCD chip n (e.g., group 1 is
                              chip 1), unless not all chips were used. Group
                              parameter DETECTOR always gives the chip used.
            PC     4          Group n is chip n + 4 (e.g., group 1 is chip 5).
                              If not all chips were used, see the DETECTOR
                              parameter, which always gives the chip used.
WFPC2       All    4          The Planetary chip (detector 1) is group 1; the
                              Wide Field chips (detectors 2-4) are groups 2-4.
                              If not all chips were used, see the DETECTOR
                              keyword.
------------------------------------------------------------------------------

At this point it may be a good idea to verify that you received all the scientific data you were expecting. We suggest you use the latest version of your Phase II proposal to verify that the tape contains all the data you expected to receive. The individual instrument chapters give examples of how to relate proposal exposure logsheet lines to data files.

CHAPTER 4: Getting Data From the Archive

In This Chapter...
Getting Started Using the HST Archives
StarView Tutorial
Tutorial: Retrieving Calibration Reference Files

STScI maintains an Archive of all HST data and associated calibration files. The Archive includes a large database describing the observations that have been made with the HST and the files contained in the Archive.
An interface to the Archive, StarView, allows you to search the Archive and retrieve data from it. In this chapter we will explain how to access and use the HST Data Archive. In the first section we describe how to get an account and access the Archive host computers at STScI. In the second section we work through a sample StarView session illustrating basic techniques, from login through retrieval and logout. The last section is a tutorial showing how to find and retrieve calibration reference files.

Getting Started Using the HST Archives

In this section we describe how to access the HST Archive through the host computers and how to register for a retrieval account.

Accessing the Archive Hosts

STScI has set up two Archive host computers for external access to the HST Archive: stdata.stsci.edu (VMS) and stdatu.stsci.edu (Unix). To get started using the HST Archive, external users should use telnet to connect to one of these host computers and log in with username guest and password archive. Once you're in as guest, you can use the Archive user interface, StarView, to peruse the database. Simply type starview from the command line for the terminal version or xstarview for the X-Windows version. European users should generally use the ST-ECF Archive system: telnet to stesis.hq.eso.org and log in with the user name starcat, or contact catalog@eso.org for help. Canadian users should request archival data through the CADC; contact cadc@dao.nrc.ca for help. The user interface at ST-ECF and CADC is STARCAT, not StarView. To retrieve data, you'll need a registered account. The simplest way to register is through our World Wide Web form at the following URL: http://stdatu.stsci.edu/registration.html You may also register by typing the register command while logged onto one of the Archive host machines. This puts you in an editor on a registration form. After you have filled out the form, exit the editor, and the form will be mailed to the Archive support hotseat.
Your retrieval account will be activated within two working days; you will get e-mail when it is ready. Once you have registered, you can use StarView to retrieve non-proprietary images and spectra. You can also access the Archive computers via the World-Wide Web through the STScI Home Page (see "World Wide Web" on page 5). General Observers and Guaranteed Time Observers normally have exclusive rights to their HST data for one year. However, all observations obtained under calibration proposals are immediately non-proprietary. Data retrieved through StarView will appear in the data directory of the Archive host computer. You will receive an e-mail message when the retrieval request has been queued, and another when it has been completed. You can then use anonymous FTP to transfer the files to your home computer. While logged onto the Archive host computers, you can type home to return to your login directory, or docs to move to a directory of documents and manuals, including PostScript versions of the HST Archive Manual and the HST Archive Primer. You can retrieve PostScript copies of the manuals using FTP (the FTP session will have to be started on your home machine, since neither stdata nor stdatu will allow outgoing FTP or telnet sessions). Alternatively, you can contact the Archive hotseat and ask for a hardcopy to be sent to you. See the HST Archive's World Wide Web page for more information. If you have any questions, direct them via e-mail to the Archive help desk at archive@stsci.edu, which also provides technical assistance in using the HST Archive, or phone (410) 338-4547. The Archive web page is at URL: http://www.stsci.edu/archive.html

StarView Tutorial

In this section a complete StarView session is presented, from logging in to the host computers through using FTP to move the data from the Archive host to your home computer. A second tutorial, on selecting and retrieving calibration reference files, is provided later in this chapter.
StarView is available in two forms:
* An X-Windows based version.
* A terminal version for basic terminals, such as a VT100.
In this manual we describe the X-Windows version, but we also provide the information needed to run the terminal version. The X-Windows version of StarView is available as distributed software which you install and run on your home computer. For information about how to install xstarview on your home system, contact the Archive hotseat. Alternatively, you could use telnet to log in to the stdatu host using the guest account. By running the xstarview client on your own computer, you avoid the overhead of running the software over the network, giving you a considerable speed improvement. If you need to use telnet, here's how to do so:

> telnet stdatu.stsci.edu
Connected to stdatu.stsci.edu.
Escape character is `^]`.
Login: guest
Password: archive

After logging in (either by running the xstarview client on your computer or by using telnet to access stdatu), some introductory messages will appear on your screen. To see more of the text, press the space bar. To quit, press Q. Start the X-Windows version of StarView by typing:

> xstarview

In the X-Windows version you will be asked for your X display host name. You should respond with the name of your home workstation. You will then be instructed to add stdatu to your X server's access list by typing the following line in another window and then pressing Return to continue:

> xhost +stdatu.stsci.edu

If you want to use the terminal version instead, you would type:

> starview

In the terminal version, you will be asked to confirm your terminal setup. For example:

xterm 24 x 80 [Y]:

If this is correct, press Return to continue. If it is not, answer "no" by pressing N followed by Return. You will then be asked some questions about your terminal type, number of lines, and number of columns. Type a question mark (?) to get help about your options.
The StarView session is then started; messages will be displayed telling you what is happening (e.g., data dictionaries being loaded). This process may take a minute or two to complete.

Welcome Screen

The StarView welcome screen (Figure 4.1) will appear (this and all subsequent screens are taken from the X-Windows version). If there is any urgent news (e.g., a message about possible system downtime), it will appear at the top of the welcome text.

Figure 4.1: Welcome Screen

You can scroll through the text and read any additional information below the display area by using the scroll bar on the X-Windows version of StarView. On the terminal version (for VT100 or other basic terminals), use the arrow keys, or page up by pressing Control-V and page down by pressing Control-P.

Command Usage and Screen Interaction

In the X-Windows version of StarView:
* Use the mouse to select all functions.
* Choose options by positioning the mouse pointer over the command button or menu and pressing the left mouse button.
In the terminal version:
* Press Control-M to cycle through the three screen areas (menu, work area, and command box).
* Use the arrow keys to move around within any one portion of the screen.
* Whenever an option is highlighted, press Return to invoke the highlighted function.
You can also use the command accelerators to invoke functions (i.e., run commands). Some command buttons show accelerators such as "^N", which means the function or command can be invoked by holding down the Control key while pressing the N key. Other commands show accelerators such as "E+n", which means that you would press the Esc (escape) key followed by N.

Searching the Catalog

To search the catalog:
1. Choose a search screen.
2. Specify your search criteria, such as position and a release date before today's date so that you get public data.
3. Start the search by clicking on the [ Begin Search ] button.
4.
Step through subsequent found observations looking for those of interest to you.
In this example we use the Quick Search screen to search the HST catalog. The Quick Search screen is useful for most basic searches of the HST catalog. An extensive set of more detailed search screens is also available. To choose one of these, click on [ Other Searches ] or pull down the | Searches | menu.

The Quick Search Screen

Choose the Quick Search screen by clicking the [ Quick Search ] button. (In xstarview, clicking is done by moving the cursor to the command button and pressing the left mouse button. In the terminal version, either (1) use Control-T to move the cursor to the command box and the arrow keys to move the cursor until the button is highlighted, then press Return, or (2) press Control-J, the accelerator for the command.) The Quick Search screen is shown in Figure 4.2. We will use this screen to search for all publicly available WFPC2 observations of M87.

Figure 4.2: Quick Search Screen

Specifying Search Criteria

There are various ways to search for observations of a particular target in the catalog. The easiest way is to enter the name (which you should embed in wildcard characters, e.g., *mars*) in the target field. Because observers do not necessarily use the same convention to name sources, this will typically not return all observations of a given source. The best way to be certain you retrieve all observations of a given target (for stationary targets) is to search for observations within a given (radial) distance of your source's position by entering constraints in the "RA", "Dec", and "Search radius" fields on the screen. If you do not know the RA and Dec of your target, you can run either the SIMBAD or NED target name resolver from within StarView. Each resolver automatically determines the target's position using a network connection to either the SIMBAD database in Europe or the NASA Extragalactic Database (NED) in California. It then populates the RA and Dec fields on the search screen with this information.
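To see what the "Search radius" constraint does, note that the Archive simply selects observations whose angular (great-circle) separation from the RA and Dec you entered is within the radius. The short Python sketch below illustrates that geometry; the function names are our own illustration, not part of StarView, and Python is not required for any Archive work.

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two (RA, Dec) positions,
    all given in decimal degrees (atan2 form, stable at small angles)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = math.hypot(
        math.cos(dec2) * math.sin(dra),
        math.cos(dec1) * math.sin(dec2)
        - math.sin(dec1) * math.cos(dec2) * math.cos(dra),
    )
    den = (math.sin(dec1) * math.sin(dec2)
           + math.cos(dec1) * math.cos(dec2) * math.cos(dra))
    return math.degrees(math.atan2(num, den))

def within_search_radius(target_ra, target_dec, obs_ra, obs_dec, radius_arcmin):
    """True if an observation lies within the search radius of the target."""
    sep_arcmin = 60.0 * angular_separation_deg(target_ra, target_dec,
                                               obs_ra, obs_dec)
    return sep_arcmin <= radius_arcmin
```

For example, an observation offset from the target by 0.05 degrees in declination lies 3 arcminutes away and would satisfy the 5 arcminute radius used in the M87 search below.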
Click on the [ Get Coordinates ] button to use the SIMBAD resolver; to use NED, pull down the | Options | menu, select User Defaults, and change the "Coordinates lookup server" field to NED. In this case, we know the coordinates of M87, so we enter the RA and Dec for M87 and a search radius of 5 arcminutes in the corresponding fields on the search screen. We want WFPC2 observations, so move to the "Instrument" field. The valid HST instruments are:
* Faint Object Camera (FOC).
* Faint Object Spectrograph (FOS).
* Fine Guidance Sensors (FGS).
* Goddard High Resolution Spectrograph (HRS).
* High Speed Photometer (HSP).
* Wide Field Planetary Camera (WFPC).
* Wide Field Planetary Camera 2 (WFPC2).
To get help on the valid ranges for any field, use the field help. In xstarview, move the cursor to the field and press the right mouse button (or press the Help key, often located in the bottom left corner of your keyboard). In the terminal version, move the cursor to the field and press Control-H. Enter WFPC2 in the instrument field. (To find observations from more than one instrument, use a comma-separated list, e.g., WFPC2, WFPC, FOC.) We want public data, so now specify that we want data released prior to today's date: move to the "Release date" field and enter an upper limit of today's date. Figure 4.3 shows how the screen looks at this point.

Figure 4.3: Quick Search Screen With Constraints Entered

Use the [ Strategy ] button to get help using any StarView screen, or pull down | Help | in the menu bar to see all the available StarView help.

Starting the Search

Click on the [ Begin Search ] button to search the catalog for the observations satisfying your search criteria. If none are found, a message will appear at the bottom of the screen, and you will need to enter different search constraints. If at least one observation is found, the display will change to the Quick Search Results screen. This screen (Figure 4.4) shows the results of your catalog search. The first record that matches your search criteria will be displayed.
Figure 4.4: Quick Search Results Screen With Record Display

Viewing Subsequent Found Observations

If you want to scan the full list of your search results:
* Click the [ Step Forward ] button to view one record at a time.
* Click [ Scan Forward ] to see all of the found records in rapid succession. Press any key to stop the scan.
* To go back to previous records, use [ Step Back ] or [ Scan Back ].
Another way to view your search results is to use StarView's table format screen. Click on [ View Result as Table ] to see several catalog records at the same time (see Figure 4.5). Click on [ View Result as Form ] to return to the single-record screen format.

Figure 4.5: Quick Search Results Displayed on the Table Format Screen

Use the [ Preview ] button to get a quick look at the data. This can help you decide whether or not to retrieve a dataset. Preview displays compressed HST images (not suitable for science analysis), as well as FOS and GHRS spectra. Only public data are available for preview. Preview is not available in the terminal version.

Retrieving Datasets From the Archive

We now want to retrieve some of the data that we have identified in the catalog. The steps in this process are:
1. Mark the observations that you want to retrieve; you can mark them either individually or as a group.
2. Display and review the list of datasets to be retrieved.
3. Specify the file formats and media to be used in the retrieval process.
4. Submit the request.
5. Check the request status, if desired.

Marking Observations for Retrieval

To mark for retrieval the dataset displayed on the screen, click the [ Mark Dataset ] button. This action will be confirmed by a message at the bottom of the screen. Also, the "Marked" field, in the upper right corner of the screen, will display "T" (true), indicating that the dataset has been marked for retrieval.
You can mark datasets for retrieval either on the table format display screen, in which case the highlighted record is marked, or on the form screen with the record displayed. If you want to mark for retrieval all of the records matching your search criteria, click on the [ Mark All ] button. This could be a large volume of data, and it would be for the M87 search request described here. Alternatively, step through your search results records by clicking on the [ Step Forward ] button and click on the [ Mark Dataset ] button for only a few of the observations.

Reviewing the Retrieval Request

Once you have marked records for retrieval, you begin the retrieval process by displaying and reviewing a list of datasets to be retrieved. To do this:
1. Click on the [ Retrieve Marked Data ] button to exit the search screen and begin the retrieval process by bringing up the Archive Retrieval screen.
2. Review the list of datasets.
The Archive Retrieval screen lists all of the datasets that you have marked for retrieval. In this case, you would see something like Figure 4.6.

Figure 4.6: Archive Retrieval Screen

Review the list of datasets that you have marked for retrieval. If you have marked several datasets, you may need to click on the [ Next Page of Datasets ] button to see additional screens of marked records. The total number of datasets that you have marked for retrieval is shown near the bottom of the screen.

Specifying Formats and Media

1. Continue with the data retrieval process by clicking the [ Submit Request ] button.
2. Specify the files that you want to retrieve.
3. Specify the type of media (file transfer method) that you want.
When you click the [ Submit Request ] button, the File Options screen is displayed (Figure 4.7).

Figure 4.7: Retrieval Request - File Options Screen

This screen indicates the kinds of files that will be retrieved; in this case, the final calibrated science data files and the data quality report files will be retrieved. These defaults are acceptable.
Click the [ Submit Request ] button to continue with the retrieval process. The Media Options screen is then displayed (Figure 4.8). You will need to enter your Archive user name and password, pressing Return after each entry. If you do not have an Archive account, then you will not be able to retrieve data until you have registered with STScI. For information about how to register, see "Accessing the Archive Hosts" on page 89.

Figure 4.8: Retrieval Request - Media Options Screen

The screen will indicate HOST (retrieve to Archive host staging disk) as the default distribution medium. This is acceptable--your data will be retrieved to a subdirectory of the data disk on stdatu. You will then be able to FTP the data from there to your home computer, as described in "Getting Your Data" on page 103. You may also use this screen to specify that your data be put onto a tape and mailed to you. This option is especially useful for large data requests.

Submit the Request

Click on the [ Submit Request ] button to begin the submission process. StarView will validate your Archive account information and send your retrieval request to the Archive system. At this point xstarview will want to interact with you using a special xterm window which it will start up. Look at that window and respond to any xstarview requests that appear there. The list of datasets you have requested will be saved in a file named after the date and time of the request, with an extension of .req. The name of this file will be displayed in the xterm window. Figure 4.9 shows how a StarView screen might look at this stage.

Figure 4.9: Retrieval System Messages

Press Return to exit from the retrieval process and return to the StarView screen from which you initiated the retrieval request. Shortly after your request has been submitted, you will receive an e-mail message telling you that your request has been accepted and queued by the Archive system; this message will also give you the request ID.
You can use the request ID later to check the status of your request, and also to locate your data on the Archive host's staging disk after it has been retrieved.

Checking Request Status

You can check the status of your retrieval request. To do this:
1. Click on the [ Retrieval Status ] button from within the | Retrieve | menu on most StarView search screens, or choose it from the | Commands | menu.
2. You will be asked to enter your request ID (this will be e-mailed to you shortly after your request is submitted). Type the request ID.
3. Press Return to continue with your StarView session.
Figure 4.10 shows a sample retrieval status screen.

Figure 4.10: Sample Retrieval Status Screen

Exiting StarView

You can now either continue working in StarView, or you can exit. You could also log out of the Archive host altogether and wait for the mail message that will tell you that the files have been retrieved and are ready for you. This message will go to the e-mail account that you identified when you registered for an Archive account. Press Control-X to exit StarView. A dialog box will appear asking you to confirm that you really want to exit. Click [ OK ] to exit.

Getting Your Data

After your data have been retrieved from the Archive, you will receive a second e-mail notification. You may then transfer the data back to your home site via anonymous FTP. Figure 4.11 shows a sample FTP session.

Figure 4.11: Retrieving Files Using FTP

> ftp stdatu.stsci.edu
Connected to stdatu.stsci.edu.
220 stdatu.stsci.edu FTP server (Version 5.86) ready.
Name (stdatu.stsci.edu): anonymous
Password: (type your e-mail address)
. . .
ftp> cd tdk7992
250 CWD command successful.
ftp> binary
200 Type set to I.
ftp> prompt
Interactive mode off.
ftp> mget u*.fit
200 PORT command successful.
150 Opening BINARY mode data connection for u2900101t_c0f.fit
226 Transfer complete.
. . .
ftp> bye
221 Goodbye.

Don't forget to set the FTP transfer type to "binary" before transferring the files.
You are now ready to begin analyzing your dataset. Note that the data are in FITS format and may be converted to STSDAS format using the task strfits. Remember, if you have any problems or questions, contact the Archive hotseat at archive@stsci.edu.

Tutorial: Retrieving Calibration Reference Files

StarView provides calibration reference file screens for each instrument. These screens let you see which calibration files and tables were used to calibrate a given dataset by the PODPS pipeline, and which files and tables are currently recommended. You can mark either the used or the recommended reference files and tables for retrieval and retrieve them through StarView. If you already know the name of the calibration reference file or table you wish to retrieve (e.g., if you have determined the file name from a listing on STEIS), then you can retrieve just that file by using the [ Add Datasets by Name ] function on the Archive Retrieval screen. This is described on page 106. In this example, we use a StarView reference file screen to retrieve both the "used" and "recommended" calibration files for an M87 dataset.
1. Start StarView as described in "StarView Tutorial" on page 91. This will bring up the Quick Search screen (Figure 4.3).
2. Click the [ Other Searches ] button. (We want to find the search screen for calibration reference files.) The Other Searches screen will be displayed (Figure 4.12), with the cursor highlighting "Quick Search".

Figure 4.12: Other Searches Screen

3. Select the WFPC2 Reference Files screen, because we will retrieve calibration files for a WFPC2 observation. That screen will then be displayed.
4. Specify search criteria. We want to specify a particular dataset, so press the arrow keys until the cursor moves to the "Rootname" field, and then type the dataset name for the observation whose calibration files will be retrieved. For example, enter U2900101T (see Figure 4.13).

Figure 4.13: WFPC2 Reference Files - Search Specification Screen (Constrained)

5.
Click on the [ Begin Search ] button to submit your catalog search request. The search results screen will be displayed for the observation that you specified (Figure 4.14).

Figure 4.14: WFPC2 Reference Files - Search Results Screen

6. Click [ Mark USED Files for One Dataset ] to mark for retrieval those calibration files actually used to calibrate the dataset. If the files listed in the RECOMMENDED column differ from those in the USED column, then you can click on [ Mark RECOMMENDED for One Dataset ] to retrieve the calibration files that are now recommended for calibrating the data.
7. Click the [ Retrieve Marked Data ] button to begin the retrieval process for the marked reference files. Continue with the data retrieval procedures as outlined in "Retrieving Datasets From the Archive" on page 97. The defaults on the File Options screen, i.e., "Calibrated", will return the correct files for the specified calibration reference files and tables.

Retrieving a File By Name

If you know the name of a file or dataset that you wish to retrieve from the Archives, you can retrieve it directly using the [ Add Datasets by Name ] button on the Archive Retrieval screen. To get to this screen from a search screen, pull down the | Commands | menu in the menu bar (see Figure 4.1) and choose the [ Retrieve Marked Datasets ] option. This will place you in the Archive Retrieval screen. Choose the [ Add Datasets by Name ] command from that screen (or use [ Add Datasets from File ] if you have a list of dataset names). Enter the rootname (no extension) of the calibration reference file or science file you wish to retrieve. When you have added all the rootnames for the files you wish to retrieve, click [ Submit Request ] and proceed with the retrieval process as described on page 97.

CHAPTER 5: Observation Logs

In This Chapter...
What are Observation Log Files?
How to Access Observation Log Files
How to Use Observation Log Files

This chapter describes the new Observation Log Files that are available to General Observers.
These files can be used to obtain pointing and jitter data for any HST observation. Instructions for retrieving the files from the HST Archive via StarView, and examples of how to use STSDAS to extract meaningful information from the files, are provided.

What are Observation Log Files?

A set of pointing and specialized engineering data, called observation logs, are now being provided to General Observers. These data are produced by the Observatory Monitoring System (OMS), an automated software system that interrogates the HST engineering telemetry and correlates the time-tagged engineering stream with the scheduled events as determined from the Mission Schedule, the seven-day command and event list that drives all HST activities. The Observatory Monitoring System reports the status of the instruments and observatory and flags discrepancies between planned and executed actions. Each log contains the engineering data for a time period encompassing the science exposure. The data are in FITS format and may be converted to STSDAS binary table format using the STSDAS strfits task and manipulated using the STSDAS table tools (ttools). The observation logs are named according to the first eight characters of the science root name, plus the letter J. The observation logs contain a variety of interesting information and are necessary for evaluating and reconstructing the pointing stability and environment during a given exposure. The files are intended for General Observers, STScI scientists, and engineers. OMS provides observers (and archival researchers) with observational information about guide star acquisition, pointing, and tracking that is not normally provided in the science headers. OMS was installed in operations during October 1994, at which time the .cmh, .cmj, and .cmi files were generated. Pointing and tracking information prior to October 1994 is not routinely available to observers but can be requested by sending e-mail to help@stsci.edu.
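Given the naming rule above — the first eight characters of the science rootname, with the letter J in place of the final character — an observation log rootname can be derived mechanically. The helper below is a hypothetical Python illustration, not an STSDAS or StarView task:

```python
def observation_log_rootname(science_rootname):
    """Return the observation log rootname for an HST science rootname:
    the first eight characters of the science rootname followed by 'J'
    (science rootnames usually end in 'T' instead).
    Hypothetical helper for illustration only."""
    if len(science_rootname) < 9:
        raise ValueError("expected a nine-character HST rootname")
    return science_rootname[:8].upper() + "J"

# For example, science rootname U26MO801T pairs with log U26MO801J.
print(observation_log_rootname("U26MO801T"))  # → U26MO801J
```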
OMS observation logs changed to the .jih/.jid image format after August 1995. The .cmi table continued to be generated, but was renamed .jit to keep the naming convention consistent with the new software. A more in-depth description of OMS can be found in Observation Logs, by O. Lupie and B. Toth. This document can be requested via e-mail to help@stsci.edu. In the OMS version of August 1995, .cmj tables were replaced with a jitter image, which is a two-dimensional histogram of jitter excursions during the observation. The jitter image has an extension of .jih for the header and .jid for the image data. The .jit table accompanies the jitter image. The header file of the image replaces the .cmh file but includes the same information, with the addition of some image-related keywords.

Contents of Headers, Tables, and Jitter Images

The OMS data header file (.cmh or .jih) contains information about the observation. The header is divided into groups of keywords that deal with a particular topic (e.g., SPACECRAFT DATA, BACKGROUND LIGHT, POINTING CONTROL DATA, and LINE OF SIGHT JITTER SUMMARY). A short description of each keyword is provided in the header. Table 5.1 lists the OMS data file extensions and the corresponding file contents.

Table 5.1: Observation Management System (OMS) Observation Logs

Extension   Content
------------------------------------------------------------------------------
October 1994 to August 1995
.cmh        OMS header
.cmj        High time resolution (IRAF table)
.cmi        Three-second averages (IRAF table)
After August 1995
.jih/.jid   Two-dimensional histogram (image)
.jit        Three-second averages (IRAF table)
------------------------------------------------------------------------------

In the following sections we give brief descriptions of the contents of these headers and tables.
rootnamej.cmh: The ASCII header file contains the time interval, the rootname, averages of the pointing and spacecraft jitter, the guiding mode, guide star information, and alert or failure keywords. Figure 5.1 is a representative header. Note that this header is similar to the headers attached to the .cmi tables, with the exception that the ASCII file header (.cmh) contains problem flags and status warnings. A header associated with the jitter image replaces the .cmh header in a later version of OMS. It does, however, contain the same information and keywords.

rootnamej.cmj: This table presents the data at the highest time resolution for the telemetry mode in use. It contains the reconstructed pointing, guide star coordinates, derived jitter at the instrument aperture, and pertinent guiding-related flags. The intent is twofold: (1) to provide high time resolution jitter data for deconvolution calculations or for assessing small-aperture pointing stability, and (2) to display the slew and tracking anomaly flags with the highest resolution. Table 5.2 lists the table column headings, units, and brief definitions.

Table 5.2: Contents of .cmj Table

Parameter  Units        Description
------------------------------------------------------------------------------
seconds    seconds      Time since window start
V2 dom     arcseconds   Dominant FGS V2 coordinate
V3 dom     arcseconds   Dominant FGS V3 coordinate
V2 roll    arcseconds   Roll FGS V2 coordinate
V3 roll    arcseconds   Roll FGS V3 coordinate
SI V2      arcseconds   Jitter at aperture reference
SI V3      arcseconds   Jitter at aperture reference
RA         degrees      Right ascension of aperture reference
DEC        degrees      Declination of aperture reference
Roll       degrees      Angle between North and +V3
DayNight   0,1 flag     Day (0) or night (1)
Recenter   0,1 flag     Recentering status
TakeData   0,1 flag     Vehicle guiding status
SlewFlag   0,1 flag     Vehicle slewing status
------------------------------------------------------------------------------

Figure 5.1: A Representative .cmh or .jih Header

rootnamej.cmi: This table contains data that were averaged over three-second intervals. It includes the same information as the .cmj table and also includes orbital data (e.g., latitude, longitude, limb angle, magnetic field values, etc.) and instrument-specific items. This table is best suited for quick-look assessment of pointing stability and for studying trends in telescope or instrument performance with orbital environment. Table 5.3 lists the table column headings, units, and brief definitions.

Table 5.3: Contents of .cmi Table, Three-Second Averaging

Parameter     Units           Description
------------------------------------------------------------------------------
seconds       seconds         Time since window start
V2 dom        arcseconds      Dominant FGS V2 coordinate
V3 dom        arcseconds      Dominant FGS V3 coordinate
V2 roll       arcseconds      Roll FGS V2 coordinate
V3 roll       arcseconds      Roll FGS V3 coordinate
SI V2 AVG     arcseconds      Mean jitter in 3 seconds
SI V2 RMS     arcseconds      rms jitter in 3 seconds
SI V2 P2P     arcseconds      Peak jitter in 3 seconds
SI V3 AVG     arcseconds      Mean jitter in 3 seconds
SI V3 RMS     arcseconds      rms jitter in 3 seconds
SI V3 P2P     arcseconds      Peak jitter in 3 seconds
RA            degrees         Right ascension of aperture reference
DEC           degrees         Declination of aperture reference
Roll          degrees         Angle between North and +V3
LimbAng       degrees         Angle between earth limb and target
TermAng       degrees         Angle between terminator and target
LOS-Zenith    degrees         Angle between HST zenith and target
Latitude      degrees         HST subpoint latitude
Longitude     degrees         HST subpoint longitude
Mag V1,V2,V3  gauss           Magnetic field along V1, V2, V3
EarthMod      V Mag/arcsec^2  Model earth background light
SI-Specific                   Special science instrument data
DayNight      0,1 flag        Day (0) or night (1)
Recenter      0,1 flag        Recentering status
TakeData      0,1 flag        Vehicle guiding status
SlewFlag      0,1 flag        Vehicle slewing status
------------------------------------------------------------------------------

How to Access Observation Log Files

For data taken after October 20,
1994, observation log files can be retrieved from the HST Archive. Unlike science data, which generally have a one-year proprietary period, observation log files become public as soon as they are archived. To access the observation log files, you will use StarView to retrieve the files from the Archive:
1. Enter the rootnames of the observation log files that you want on StarView's retrieval screen (the names can be entered individually or read from a file on your disk). The rootname will have the same first 8 characters as the science data, but will be followed by the letter "J" instead of the usual "T". For example, if the science rootname is U26MO801T, then the observation log rootname will be U26MO801J.
2. Click on [ Submit Request ].

Figure 5.2: Retrieving Observation Log Files in StarView

3. Now, on the File Options screen, select "Observation Log Files", as shown in Figure 5.3. Again, click on [ Submit Request ], then enter your archive username and password, and the destination information.

Figure 5.3: Choosing Observation Log Files in StarView

How to Use Observation Log Files

Here are some simple examples of what can be learned from the observation log files.

Guiding Mode

Unless otherwise requested, all observations will be scheduled with FINE LOCK guiding, which may use one or two guide stars (dominant and roll). The spacecraft may roll slightly during an observation if only one guide star is acquired. The amount of roll depends upon the gyro drift at the time of the observation, the location during an orbit, and the lever arm from the guide star to the center of the aperture. There are three commanded guiding modes: FINE LOCK, FINE LOCK/GYRO, and GYRO. The OMS header keywords GUIDECMD (commanded guiding mode) and GUIDEACT (actual guiding mode) will usually agree. If there was a problem, they will not agree, and the GUIDEACT value will be the guiding method actually used during the exposure.
If the acquisition of the second guide star fails, the spacecraft guidance, GUIDEACT, may drop from FINE LOCK to FINE LOCK/GYRO, or even to GYRO, which may result in the target rolling out of an aperture. Check the OMS header keywords to verify that the guiding mode used was the one requested or, for the archive user, that the guiding mode did not change during the observation. Until new flight software (version FSW 9.6) came online in September 1995, the guiding dropped to COARSE track if the guide star acquisition failed; after September 1995, a failed guide star acquisition no longer causes a drop to COARSE track. Archival researchers may find older datasets that were obtained with COARSE track guiding.

The dominant and roll guide star keywords (GSD and GSR) in the OMS header can be checked to verify that two guide stars were used for guiding or, in the case of an acquisition failure, to identify the suspect guide star. The following list of .cmh keywords is an example of two-star guiding:

GSD_ID  = '0853601369      '  / Dominant Guide Star ID
GSD_RA  =          102.42595  / Dominant Guide Star RA (deg)
GSD_DEC =          -53.41362  / Dominant Guide Star DEC (deg)
GSD_MAG =             11.251  / Dominant Guide Star Magnitude
GSR_ID  = '0853602072      '  / Roll Guide Star ID
GSR_RA  =          102.10903  / Roll Guide Star RA (deg)
GSR_DEC =          -53.77683  / Roll Guide Star DEC (deg)
GSR_MAG =             12.426  / Roll Guide Star Magnitude

If you suspect that a target has rolled out of the aperture during an exposure, you can quickly check the counts in each group of the raw science data. As an example, the following IRAF commands can be used to determine the counts in each group:

cl> grlist z2o4040dt.d0h 1-24 > groups.lis
cl> imstat @groups.lis

Some observations can span several orbits. If the guide star reacquisition fails during a multiple-orbit observation, the observation may be terminated with possible loss of observing time, or may switch to other, less desirable guiding modes.
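The roll-out check above can also be scripted outside of IRAF once the per-group counts are in hand. The sketch below is not part of the handbook's software; the function name and the count values are invented for illustration, and the 50% threshold is an arbitrary choice.

```python
# Sketch of the roll-out check: flag groups whose total counts fall well
# below the median of all groups, hinting that the target may have rolled
# out of the aperture. Counts here would come from, e.g., imstat output.

def flag_count_drops(group_counts, drop_frac=0.5):
    """Return 1-based group numbers whose counts are below
    drop_frac times the median count over all groups."""
    ordered = sorted(group_counts)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 else
              0.5 * (ordered[n // 2 - 1] + ordered[n // 2]))
    return [i + 1 for i, c in enumerate(group_counts)
            if c < drop_frac * median]

counts = [1520.0, 1498.0, 1510.0, 1475.0, 640.0, 215.0]  # invented example
print(flag_count_drops(counts))  # groups 5 and 6 look suspect: [5, 6]
```

A steadily declining count level across groups, rather than a sudden drop, would suggest drift rather than an abrupt loss of lock.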
The GSACQ keyword in the .cmh header gives the time of the last successful guide star acquisition:

GSACQ   = '136:14:10:37.43 '  / Actual time of GS Acquisition Completion

Guide Star Acquisition Failure

The guide star acquisition at the start of the observation set could fail if the FGS fails to lock onto the guide star. The target may not be in the aperture, or, for an extended target, only a piece of the target may be in the aperture. The jitter values will be increased because FINE LOCK was not used. The following list of .cmh header keywords indicates that the guide star acquisition failed:

V3_RMS  =               19.3  / V3 Axis RMS (milli-arcsec)
V3_P2P  =              135.7  / V3 Axis peak to peak (milli-arcsec)
GSFAIL  = ' DEGRADED'         / Guide star acquisition failure!

The observation logs for all of the following observations in the observation set will have the "DEGRADED" guide star message. This is not a "Loss of Lock" situation but an actual failure to acquire the guide star in the desired guiding mode. For the example above, the guiding mode dropped from FINE LOCK to COARSE TRACK:

GUIDECMD= 'FINE LOCK'         / Commanded Guiding mode
GUIDEACT= 'COARSE TRACK'      / Actual Guiding mode at end of GS acquisition

If the observational dataset spans multiple orbits, the guide star will be re-acquired, but the guiding mode will not change from COARSE TRACK. In September 1995, the flight software was changed so that COARSE TRACK is no longer an option. The guiding mode now drops from two-guide-star FINE LOCK to one-guide-star FINE LOCK, or to GYRO control.

Moving Targets and Spatial Scans

A type 51 slew is used to track moving targets (planets, satellites, asteroids, and comets). Observations are scheduled with FINE LOCK acquisition, i.e., with one or two guide stars.
Usually, a guide star pair will stay within the pickle during the entire observation set, but if two guide stars are not available, a single guide star may be used, assuming the drift is small or the proposer has stated that roll is not important for that particular observing program. An option during scheduling is to drop from FGS control to GYRO control when the guide stars move out of the FGS. Also, guide star handoffs (which are not a simple dropping of the guide stars to GYRO control) will affect the guiding and may be noticeable when the jitter ball is plotted.

The jitter statistics are accumulated at the start of the observation window. Moving-target and spatial-scan motion will be seen in the jitter data and image. Therefore, the OMS header keywords V2_RMS and V3_RMS (the root mean square of the jitter about the V2 and V3 axes) can be quite large for moving targets. Also, a special anomaly keyword (SLEWING) will be appended to the OMS header, noting movement of the telescope during the observation. This is expected for moving-target observations. The following list of .cmh header keywords is an example of expected values while tracking a moving target:

/ LINE OF SIGHT JITTER SUMMARY
V2_RMS  =                3.2  / V2 Axis RMS (milli-arcsec)
V2_P2P  =               17.3  / V2 Axis peak to peak (milli-arcsec)
V3_RMS  =               14.3  / V3 Axis RMS (milli-arcsec)
V3_P2P  =               53.6  / V3 Axis peak to peak (milli-arcsec)
RA_AVG  =          244.01757  / Average RA (deg)
DEC_AVG =          -20.63654  / Average DEC (deg)
ROLL_AVG=          280.52591  / Average Roll (deg)
SLEWING = '           T'      / Slewing occurred during this observation

High Jitter

The spacecraft may shake during an observation, even though the guiding mode is FINE LOCK. This may be due to a micro-meteorite hit, jitter at a day-night transition, or other unknown causes. The FGS is quite stable and will track a guide star even during substantial motion.
The target may move about in an aperture, but the FGS will continue to track guide stars and reposition the target into the aperture. For most observations, the movement about the aperture during a spacecraft excursion will be quite small, but sometimes, especially for observations with the spectrographs, the aperture may move enough that the measured flux for the target will be less than in a previous group. Check the OMS header keywords (V2_RMS, V3_RMS) for the root mean square of the jitter about the V2 and V3 axes. The following list of .cmh header keywords is an example of typical guiding rms values:

/ LINE OF SIGHT JITTER SUMMARY
V2_RMS  =                2.6  / V2 Axis RMS (milli-arcsec)
V2_P2P  =               23.8  / V2 Axis peak to peak (milli-arcsec)
V3_RMS  =                2.5  / V3 Axis RMS (milli-arcsec)
V3_P2P  =               32.3  / V3 Axis peak to peak (milli-arcsec)

Recentering events occur when the spacecraft software decides that shaking is too severe to maintain lock. The FGS will release guide star control and within a few seconds reacquire the guide stars; it is assumed that the guide stars are still within the FGS field of view. During the recentering time, INDEF will be written to the OMS table. Recentering events are tracked in the OMS header file.

Be careful when interpreting "Loss of Lock" and "Recentering" events that occur at the very beginning or at the end of the OMS window. The OMS window is larger than the observation window. These events might not affect the observation, since the observation start time will occur after the guide stars are acquired (or re-acquired), and the observation stop time may occur before a "Loss of Lock" or "Recentering" event at the end of an OMS window.

The following IRAF command shows how to plot time vs. jitter along the direction of the V3 axis (see Figure 5.4):

cl> sgraph "u26m0801j.cmi seconds si_v3_avg"

Figure 5.4: Plotting Jitter Along V3 Axis

To get an idea of pointing stability, plot jitter along the V2 axis vs.
jitter along the V3 axis (see Figure 5.5):

st> sgraph "u26m0801j.cmi si_v2_avg si_v3_avg"

Figure 5.5: Plotting V2 vs. V3 Jitter

The tstatistics task can be used to find the mean value of the si_v3_avg column--the amount of jitter (in arcseconds) along the V3 axis. This value could be used in software such as TinyTim to model jitter in a PSF. In this example, the mean jitter is ~3 mas, which is typical for post-servicing mission data:

Figure 5.6: Averaging a Column with tstatistics

tt> tstat u26m0801j.cmi si_v3_avg
# u26m0801j.cmi  si_v3_avg
#
# nrows            mean        stddev       median          min         max
     11  -0.003006443888    0.00362533  -7.17163E-4  -0.00929515  0.00470988

Understanding and interpreting the meaning of the table columns and header keywords is critical to understanding the observation logs. Please read the available documentation and contact the STScI Help Desk (help@stsci.edu) if you have any questions about the files. Documentation is available at the following URL:

http://ra.stsci.edu/documents/OL/OL_1.html

PART 3

FAINT OBJECT SPECTROGRAPH

This part introduces the basics of the Faint Object Spectrograph (FOS) and gives guidelines for reducing FOS spectra. It is intended as a general tool for helping users to work with FOS data taken at any time, be it pre-COSTAR or post-COSTAR. This approach differs from that of the FOS Instrument Handbook, which gives up-to-date information on the instrument performance for use in submitting HST observing proposals. The current version of the Instrument Handbook (version 6.0) will be the last if the FOS is taken out of the telescope, as planned, during the next servicing mission in February 1997. This part provides general information, including hints and pointers telling you how to get more detailed information about handling data in each special case. Several data reduction steps will be explained, showing before- and after-processing examples of data exhibiting certain characteristics.
The FOS information spans several chapters: first, we will introduce the instrument and the different modes in which it takes data. Then we will describe how to compare a planned observation with the dataset returned from an executed observation, to determine how the real data compare to what you expected. This will be followed by a description of how data are calibrated in the Routine Science Data Processing (RSDP) pipeline, problems that can arise in this procedure, and how, if necessary, you can recalibrate your data with updated calibration files. We also describe some calibration problems specific to FOS data and discuss some of its specific instrument characteristics.

HST data quality is not a fixed property! Calibration data are taken and evaluated continuously, allowing improved calibration of new data, previously delivered data, and data from the archives. This document can help you realize the highest possible data quality, even if your data show no obvious signs of calibration error.

More Information on FOS

Additional information about the Faint Object Spectrograph can be requested through the Help Desk via e-mail to: help@stsci.edu. The listing of all available documents is available through STEIS using the following URL:

http://www.stsci.edu/ftp/instrument_news/FOS/fos_bib.html

------------------------------------------------------------------------------

CHAPTER 12

FOS Instrument Overview

In This Chapter...

Instrument Basics
Observing Modes

This chapter provides the most basic information needed to understand the geometry of the FOS instrument and the optical components affecting the light path. Quantitative descriptions of the instrument performance and capabilities are found in the FOS Instrument Handbook.

Instrument Basics

The FOS has two Digicon detectors with independent optical paths (Figure 12.1). The Digicons operate by accelerating photoelectrons emitted by a transmissive photocathode onto a linear array of 512 diodes.
The blue detector (FOS/BL) is sensitive from 1150 A to 5400 A, while the red Digicon (FOS/RD) covers the wavelength range from 1620 A to 8500 A. Figure 12.2 shows the quantum efficiencies of both detectors. Note that the graphs in Figure 12.2 are provided for illustration only; the plotted values are not accurate enough for quantitative use in data analysis. The general characteristics of FOS/BL and FOS/RD are compiled in Table 12.1. Details, such as the spectral resolution as a function of aperture size and source extent, can be found in the FOS Instrument Handbook.

The FOS has several apertures for different scientific purposes. There is a large aperture for acquiring targets using on-board software (3".7 x 3".7; designation 4.3). The aperture sizes are post-COSTAR values, while the designations are pre-COSTAR, and are therefore somewhat inconsistent. Since the diode array extends only 1".3 in the direction perpendicular to the dispersion, this largest aperture has an effective collecting area of 3".7 x 1".3. Other apertures include several circular apertures with sizes 0".86 (1.0), 0".43 (0.5), and 0".26 (0.3), as well as paired square apertures with sizes 0".86 (1.0-PAIR), 0".43 (0.5-PAIR), 0".21 (0.25-PAIR), and 0".09 (0.1-PAIR), for isolating spatially resolved features and for measuring the sky. In addition, a slit and two barred apertures are available (Figure 12.1 and Table 12.2).

Figure 12.1: FOS Optical Path

Figure 12.2: Detector Quantum Efficiencies

Table 12.1: Detector Characteristics

Attribute             Value
------------------------------------------------------------------------------
Wavelength coverage   FOS/BL: 1150 A to 5400 A in several grating settings.
                      FOS/RD: 1620 A to 8500 A in several grating settings.
Spectral resolution   High: lambda/deltalambda ~ 1300.
                      Low: lambda/deltalambda ~ 250.
Time resolution       delta t > 0.033 seconds.
Acquisition aperture  3.7" x 3.7" (4.3).
Science apertures     Largest: 3.7" x 1.3" (4.3).
                      Smallest: 0.09" square paired (0.1-PAIR).
Brightest stars       V ~ 9 for B0V, V ~ 7 for G2V, depending on
                      spectral type and wavelength.
Dark count rate       FOS/BL: 0.0064 counts s^-1 diode^-1.
                      FOS/RD: 0.0109 counts s^-1 diode^-1.
Example exposure      F_1300=2.5 x 10^-13, SNR=20/(1.0A), t=260s.
times, 0.9" (1.0)     F_2800=1.3 x 10^-13, SNR=20/(2.0A), t=10s (FOS/BL).
aperture              F_2800=1.3 x 10^-13, SNR=20/(2.0A), t=6.6s (FOS/RD).
------------------------------------------------------------------------------

A variety of dispersers is available on the FOS; their designations and basic properties are collected in Table 12.3. Each disperser directs incoming light onto a different location on the photocathode. From the photocathode, electrons are deflected magnetically, without magnification, onto the diode array of the Digicon. In this way the light transmitted by the aperture is projected onto the diode array. The relative sizes of the different apertures as they are projected onto a section of the diode array are displayed in Figure 12.3. The individual diodes are spaced 0".31 (post-COSTAR) or 0".36 (pre-COSTAR) along the dispersion direction and are 1".29 (1".43) tall perpendicular to it. In this handbook, we refer to the dispersion direction as the x-direction; the axis of the FOS perpendicular to the dispersion we call the y-direction. The deflection of the photoelectrons is controlled by an internal magnetic field, which in turn depends on a high-voltage setting. The unit of distance in the y-direction is the so-called y-base unit. The high voltage is adjusted so that 256 y-base units correspond to the height of the diodes (pre- and post-COSTAR).
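The y-base geometry just described implies a simple conversion between y-base units and arcseconds: 256 y-base units span the diode height of 1".29 (post-COSTAR) or 1".43 (pre-COSTAR). A minimal sketch of that conversion, with an invented function name:

```python
# Convert FOS y-base units to arcseconds. 256 y-base units correspond to
# the diode height: 1.29 arcsec post-COSTAR, 1.43 arcsec pre-COSTAR.

DIODE_HEIGHT_ARCSEC = {"post-COSTAR": 1.29, "pre-COSTAR": 1.43}
YBASE_PER_DIODE_HEIGHT = 256

def ybase_to_arcsec(ybase, era="post-COSTAR"):
    """Convert a y-base offset to arcseconds for the given era."""
    return ybase * DIODE_HEIGHT_ARCSEC[era] / YBASE_PER_DIODE_HEIGHT

# One y-base unit is about 5 milliarcseconds post-COSTAR:
print(round(ybase_to_arcsec(1) * 1000, 2))  # 5.04 (mas)
```
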
Table 12.2: FOS Apertures

Designation  Number  Shape        Size (")      Separation (")  (Header Designation)
------------------------------------------------------------------------------
0.3          Single  Round        0.26 dia      NA              (B-2)
0.5          Single  Round        0.43 dia      NA              (B-1)
1.0          Single  Round        0.86 dia      NA              (B-3)
0.1-PAIR     Pair    Square       0.09          2.57            (A-4)
0.25-PAIR    Pair    Square       0.21          2.57            (A-3)
0.5-PAIR     Pair    Square       0.43          2.57            (A-2)
1.0-PAIR     Pair    Square       0.86          2.57            (C-1)
0.25X2.0     Single  Rectangular  0.21 x 1.71   NA              (C-2)
0.7X2.0-BAR  Single  Rectangular  0.60 x 1.71   NA              (C-4)
2.0-BAR      Single  Square       1.71          NA              (C-3)
BLANK        NA      NA           NA            NA              (B-4)
4.3          Single  Square       3.66 x 3.71   NA              (A-1)
FAILSAFE     Pair    Square       0.43 and 3.7  NA
------------------------------------------------------------------------------

Table 12.3: Dispersers Available on FOS

              Diode No.   Low lambda  Diode No.    High lambda  delta lambda  Blocking
Grating       Low lambda  (A)         High lambda  (A)          (A Diode^-1)  Filter
------------------------------------------------------------------------------
Blue Digicon
G130H         53          1140^a      516^b        1606         1.00
G190H         1           1573        516          2330^c       1.47
G270H         1           2221        516          3301         2.09          SiO2
G400H         1           3240        516          4822         3.07          WG 305
G570H         1           4574        516          6872^d       4.45          WG 375
G160L         319         1140^a      516          2508^c       6.87
G650L         295         3540        373          9022^d       25.11         WG 375
PRISM^e       333         1500^f      29           6000^d
Red Digicon^i
G190H         503         1590^g      1            2312         -1.45
G270H         516         2222        1            3277         -2.05         SiO2
G400H         516         3235        1            4781         -3.00         WG 305
G570H         516         4569        1            6818         -4.37         WG 375
G780H         516         6270        126          8500^h       -5.72         OG 530
G160L         124         1571^g      1            2424         -6.64
G650L         211         3540        67           7075         -25.44        WG 375
PRISM^e       237         1850        497          8950^h
------------------------------------------------------------------------------
a. The blue Digicon MgF2 faceplate absorbs light shortward of 1140 A.
b. The photocathode electron image typically is deflected across 5 diodes, effectively adding 4 diodes to the length of the diode array.
c. The second order overlaps the first order longward of 2300 A, but its contribution is at a few percent.
d. Quantum efficiency of the blue tube is very low longward of 5500 A.
e.
PRISM wavelength direction is reversed with respect to the gratings of the same detector.
f. The sapphire prism absorbs some light shortward of 1650 A.
g. The red Digicon fused silica faceplate strongly absorbs light shortward of 1650 A.
h. Quantum efficiency of the red detector is very low longward of 8500 A.
i. Dispersion direction is reversed for FOS/RD relative to FOS/BL.

Figure 12.3: Aperture Sizes Projected On Diode Array

In order to minimize external influences on the magnetic deflection of electrons from the photocathode onto the diode array, both Digicons are magnetically shielded. However--especially on the red side--the shielding is inadequate. Thus, the telescope's orientation relative to the earth's magnetic field influences the characteristics of the FOS. This is the so-called geomagnetically induced image motion (GIM) problem. To minimize this effect, on-board software was developed to compensate for the error. A residual uncertainty remains, however, which affects the calibration accuracy of the instrument, as described further on page 238.

Target Acquisition

Before an observation, the target must, naturally, be located in the instrument aperture. Several techniques for target acquisition have been developed and will be described in this chapter. Only after the target is acquired can the science exposure be started. We will describe science data acquisition in the next section.

Note: Starting approximately November 1, 1992 (in Cycle 2), FOS calibration measurements are conducted with the highest possible pointing accuracy, i.e., with the target as close as possible (<= 0".04) to the center of the aperture. Thus, the FOS instrument performance is best understood for the very center of the apertures. The accuracy inherent to your choice of target acquisition mode will therefore determine the calibration accuracy that can later be reached, i.e., how closely STScI's calibrations apply to your observations.
FOS data can be obtained through several different apertures, ranging from 0".09 to 3".7 in size for post-COSTAR data (0".1 to 4".3 pre-COSTAR). The target is centered in the chosen aperture using one of four methods:

* Blind pointing.
* One of the FOS target acquisition techniques listed below.
* Via an interactive or WFPC2-assisted acquisition.
* Via a GHRS-assisted acquisition.

The FOS target acquisition modes and their corresponding header keyword values found in the standard header packets (.shh file) are:

* Binary acquisition (keyword value OPMODE=ACQ/BINARY).
* Interactive (keyword value OPMODE=ACQ).
* Peak-up or peak-down (keyword value OPMODE=ACQ/PEAK).
* Firmware (keyword value OPMODE=ACQ/FIRMWARE).

In order to avoid unnecessary overhead times, a new technique has been developed for proposals that require more than one visit to a target within a few days (up to two months). This reuse target offsets method allows the instrument to apply, during subsequent visits, a target offset that was derived in the acquisition for the first visit, so that these later visits need only a single-stage peak-up/peak-down acquisition to reconfirm the correct centering of the target in the aperture. A much more detailed description of all the different target acquisition modes is given in the FOS Instrument Handbook.

Target acquisition observations produce science datasets. Thus, for each set of FOS observations of a given source, the dataset taken first will be the FOS target acquisition image. This image is easily identifiable because the header keyword GRNDMODE will be set to ACQ, and the header keyword OPMODE (in the .shh file) will be set to the requested target acquisition mode. With the FOS, it is also possible to take an image of the photocathode with the mirror in place following the target acquisition; such an image observation will have a keyword value of OPMODE=IMAGE.
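The keyword logic above can be condensed into a small lookup. This is only a sketch of the identification rules stated in this section, not an STSDAS task; the function name and return strings are our own, and the keyword values are the OPMODE/GRNDMODE values listed above.

```python
# Label an FOS dataset from its OPMODE (.shh header) and, optionally,
# its GRNDMODE (science data header), per the rules in this chapter.

def classify_fos_dataset(opmode, grndmode=None):
    """Return a human-readable label for the dataset type."""
    acq_modes = {
        "ACQ/BINARY":   "binary target acquisition",
        "ACQ":          "interactive target acquisition",
        "ACQ/PEAK":     "peak-up/peak-down target acquisition",
        "ACQ/FIRMWARE": "firmware target acquisition",
        "IMAGE":        "photocathode image",
    }
    if opmode in acq_modes:
        return acq_modes[opmode]
    if grndmode == "ACQ":            # acquisition images carry GRNDMODE=ACQ
        return "target acquisition"
    return "science observation"

print(classify_fos_dataset("ACQ/BINARY"))             # binary target acquisition
print(classify_fos_dataset("ACCUM", "SPECTROSCOPY"))  # science observation
```
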
The target acquisition data can later be used to determine how well the source was centered in the aperture. Note: Although target acquisition data are not substepped and overscanned, they may still contain scientifically valuable information and are thus delivered to the observer on tape. Science Data Acquisition To maximize the science data output from the FOS, you would normally oversample spectra and shift the object spectrum with respect to the diode array during several subintegrations. These two procedures are called substepping and overscanning. Substepping is used to better sample the spectrum in the wavelength direction and overscanning is used to assure that each pixel in the final spectrum contains data received from multiple diodes (to smooth out diode-to-diode variations and insure against data loss when a single diode is disabled). Both substepping and overscanning rely on the magnetic focus assembly in the Digicon detector to magnetically deflect the photoelectrons in the dispersion direction so that they fall on slightly different locations on the diode array. For substepping, the spectrum is deflected by a fraction of a diode in the dispersion direction (where the fraction is given by 1/NXSTEPS and NXSTEPS is a header keyword). The diodes are read out into unique memory locations for each substep and the substepping is performed NXSTEPS times. For overscanning, the process of substepping is continued over more than one diode in the dispersion direction. A complete round of substepping is performed for each overscan step. The number of overscan steps performed is determined by the overscan parameter (header keyword OVERSCAN). Each time a given wavelength position is deflected onto and measured by a new overscan diode, its counts are co-added into the same memory location in the FOS microprocessor. When using the full diode array, the result is a spectrum with 512 * NXSTEPS plus a small number (NXSTEPS x (OVERSCAN - 1)) of edge pixels. 
Each pixel (excluding the edge pixels) has data contributed from the number of diodes specified by OVERSCAN. Thus, substepping changes the number of pixels in the final spectrum, while overscanning principally changes the number of diodes that contribute to a single pixel. Although the number of diodes in the diode array is only 512, the number of pixels in an ACCUM mode observation is given by the equation:

# of pixels = (# of diodes + (OVERSCAN - 1)) x NXSTEPS

The default values of NXSTEPS=4 and OVERSCAN=5 yield a typical ACCUM mode spectrum of 2064 pixels. A given diode will have contributed to the data in (NXSTEPS x OVERSCAN) pixels.

The mode in which a given dataset is taken is identified in the data headers by the keywords OPMODE and GRNDMODE. The OPMODE (.shh) and GRNDMODE (.d0h, .c1h) keyword values are listed at the beginning of each of the following sections describing the individual modes and are also identified in the following list of available modes:

* Spectrophotometry: OPMODE=ACCUM, GRNDMODE=SPECTROSCOPY
* Time-resolved spectrophotometry: OPMODE=PERIOD, GRNDMODE=TIME RESOLVED
* Rapid readout: OPMODE=RAPID, GRNDMODE=RAPID READOUT
* Spectropolarimetry: OPMODE=ACCUM, GRNDMODE=SPECTROPOLARIMETRY

Spectrophotometry Mode

Spectrophotometry mode is identified by an OPMODE keyword value of "ACCUM" and a GRNDMODE keyword value of "SPECTROSCOPY". For the standard ACCUM mode, the default value of NXSTEPS is 4 and OVERSCAN is 5 (see above). ACCUM mode spectra with a total exposure time lasting more than a few minutes are read out at regular intervals to the ground or to the onboard tape recorders. The frequent readouts protect against catastrophic data loss. Since the data are read out at regular intervals, all observations longer than a few minutes (the time between readouts is usually about two minutes for the red detector and about four minutes for the blue detector) are time resolved.
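The pixel bookkeeping above can be checked with a one-line function. The function name is our own; the formula and the default NXSTEPS/OVERSCAN values are those given in the text.

```python
# ACCUM mode pixel count: pixels = (diodes + OVERSCAN - 1) * NXSTEPS

def accum_pixels(n_diodes=512, nxsteps=4, overscan=5):
    """Number of pixels in an ACCUM mode spectrum."""
    return (n_diodes + overscan - 1) * nxsteps

print(accum_pixels())  # 2064, for the default NXSTEPS=4, OVERSCAN=5
print(4 * 5)           # each diode contributes to NXSTEPS * OVERSCAN = 20 pixels
```
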
Each readout is stored in the data files as follows: the first readout is stored as group one, the next readout is added (accumulated) to the previous readout and the sum is stored as group two, and so on. The last group contains the spectrum from the full exposure time of the observation. The number of groups per observation depends on the length of the exposure and the detector used. More information on the different data files from ACCUM observations is available in "Contents of Data Tapes" on page 211 and "Details of the FOS Pipeline Process" on page 235.

Raw paired-aperture data are stored as concatenated data per group, i.e., data from apertures A and B are stored together in one group in the raw data files, similar to spectropolarimetry data for each pass direction (instead of only one spectrum per group, as for the single apertures).

Time-Resolved Spectrophotometry Mode

Time-resolved spectrophotometry mode is identified by the keyword values OPMODE=PERIOD and GRNDMODE=TIME RESOLVED. This mode is normally used for objects with known periodicity in the 50 msec to 100 sec range. To maintain the phase information of these observations, the known period (CYCLE-TIME) of the object is divided into bins or slices, where each bin has a duration of period/BINS. The spectra acquired in this mode are stored in the different bins, which correspond to a given phase of the period. The information obtained in each period is added correctly to the pattern so that the phase information is maintained (so long as the period is known accurately). Note that relativistic aberration is important for short periods and long observations. However, there is no correction for light travel time across the orbit. The raw (.d0h) data file for time-resolved mode contains a single data group that is made up of all the individual spectral slices (or bins) stored sequentially.
For example, if an observation used 374 detector channels, with NXSTEPS=1, OVERSCAN=5, and SLICES=32, the .d0h file would contain one data group having a total length of:

(374 + (5 - 1)) x 1 x 32 = 12096 pixels

The calibrated wavelength, flux, error, and data quality files will have the data from the individual slices (bins) broken out into separate groups. For the example above, the .c0h, .c1h, .c2h, and .cqh files would have 32 groups of 378 pixels.

The .c3h file is organized as follows: groups 1 and 2 contain the average flux and average errors, respectively, of all the individual calibrated spectra. Following these, there are pairs of groups where the first group in each pair contains the difference between an individual flux spectrum and the average, and the second group in each pair contains the sum of the errors for the individual spectrum and the average. See "Details of the FOS Pipeline Process" on page 235 for details on how the average and difference spectra are generated. For example, if the observation consisted of 32 slices, the structure of the .c3h file would be that shown in Table 12.4.

Table 12.4: Group Structure of .c3h File with 32 Slices

Group #  Contents
-------  ---------------------------------------------------
1        Average of all 32 flux spectra from the .c1h file
2        Average of all 32 error spectra from the .c2h file
3        Spectrum 1 minus average
4        Combined spectrum 1 and average errors
5        Spectrum 2 minus average
6        Combined spectrum 2 and average errors
.        .
.        .
65       Spectrum 32 minus average
66       Combined spectrum 32 and average errors

Rapid Readout Mode

Rapid readout mode is identified by the keyword values OPMODE=RAPID and GRNDMODE=RAPID READOUT. For certain astronomical targets where rapid variability in flux is suspected, but the precise period is unknown, or the expected variation is aperiodic, the PERIOD mode of data acquisition is unsuitable because the bin folding period must be specified. In such cases the RAPID readout mode is used.
In this mode, the data are acquired using the normal substepping and overscanning techniques. The spectra are read out at intervals (chosen by the observer according to the scientific goals) that are much shorter than the nominal 4 minutes (blue detector) or 2 minutes (red detector). Each readout is stored in the raw data file as a group. The number of groups in a RAPID mode observation .d0h file is equal to the number of individual readouts.

The .c3h (special mode processing output) file contains two data groups. The number of pixels in each group is equal to the number of readouts (groups) in the original data. Group 1 of the .c3h file contains the summed flux values, where the value of each pixel is the sum of all pixels from an original readout (i.e., pixel 1 contains the sum of all pixel values from readout 1, pixel 2 is the sum of all pixels from readout 2, etc.). Group 2 contains the sum of the corresponding statistical error values (in quadrature). The .c3h files effectively provide the light curve of the target for the length of the observation. See "Details of the FOS Pipeline Process" on page 235 for more details on special mode processing. The IRAF/STSDAS task to combine groups in such a dataset is rcombine (see Chapter 2).

Spectropolarimetry Mode

Spectropolarimetry mode is identified by an OPMODE value of "ACCUM" and a GRNDMODE value of "SPECTROPOLARIMETRY". The polarimetry data consist of a number of exposures (POLSCAN=16, 8, or 4) with the waveplate set at different angles and taken consecutively (within one orbit). The Wollaston prism splits the light beam into two spectra corresponding to the orthogonal directions of polarization. Hence, each exposure consists of the two orthogonal spectra obtained with a single waveplate angle. These spectra are deflected alternately onto the diode array, recorded as two pass directions, and stored as a single group in the raw data file.
The first spectrum corresponds to the first pass direction (ordinary ray), the second to the second pass direction (extraordinary ray). The number of groups in the raw data file is equal to NREAD x POLSCAN. Thus, normally (for NREAD=1) there will be as many groups in the raw data file as the number of waveplate positions used in the observation. The number of POLSCAN positions (and therefore the total number of groups in the raw data file) may be 4, 8, or 16, depending on the number of polscans requested. Further details concerning polarimetry datasets and their calibration procedures can be obtained from within IRAF by typing "help specpolar opt=sys".

------------------------------------------------------------------------------

CHAPTER 13

FOS Planned vs. Executed Observations

In This Chapter...

Contents of Data Tapes
Headers and Keywords
Binary Acquisition-ACQ/BIN
Peak-up Acquisition-ACQ/PEAK
Science Observations
Engineering Data

Contents of Data Tapes

From either your data tape or the Archive you will receive various FITS files containing science data and other information. In the following we provide a description of what is stored where. The STSDAS routine for unpacking your data is strfits (see Chapter 3). The resulting files will have default extensions, as described in Table 13.1. The .c*h files represent different stages of the calibration process, which will be described in detail below. The easiest way to get a quick glance at your spectra is by using the IRAF/STSDAS routine splot, or fwplot if you want to plot wavelength vs. flux (see Chapter 2).
Table 13.1: File Name Extensions

Extension       File Contents
------------------------------------------------------------------------------
Raw Data Files
.shh/.shd       Standard header packet
.d0h/.d0d       Science data image
.q0h/.q0d       Science data quality
.d1h/.d1d       Science trailer line
.u1h/.u1d       Unique data log
.x0h/.x0d       Science header line
.xqh/.xqd       Science header line data quality
Calibrated Data Files
.c0h/.c0d       Calibrated wavelengths
.c1h/.c1d       Calibrated fluxes
.c2h/.c2d       Propagated statistical error
.c3h/.c3d       Special statistics
.c4h/.c4d       Count rate
.c5h/.c5d       Flat-fielded object spectra
.c6h/.c6d       Flat-fielded sky spectra
.c7h/.c7d       Background spectra
.c8h/.c8d       Flat-fielded object minus smoothed sky spectra
.cqh/.cqd       Output data quality
------------------------------------------------------------------------------

Detailed information about your data is given in several different places. We give a brief overview of these sources here.

Headers and Keywords

Header files provide most of the information needed to reduce FOS data. The headers are divided into groups of keywords that deal with a particular topic. A description of each keyword is often provided in the header itself. Table 13.2 gives a short description of the different topics covered in the various header files. The header files used most often are the standard header packet (.shh), the science data header file (.d0h), and the calibrated science data header file (.c1h). Most of the information needed to understand the data is found in the header keywords that describe the general information and the processing and calibration information sections of the headers. Table 13.3 lists some of the important header keywords used to interpret FOS data.
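The header (.xxh) files of a GEIS dataset are plain ASCII with FITS-style "keyword = value / comment" cards, so a quick look at keywords such as GCOUNT or GRNDMODE needs nothing more than a text parse. A simplified sketch (an assumption-laden toy, not an STSDAS routine: real cards are fixed 80-character records and have more quoting rules than handled here):

```python
def read_geis_header(path):
    """Parse the ASCII header of a GEIS file into a dict of strings.

    Simplified: stops at END, splits on the first '=', discards any
    trailing '/' comment, and strips quotes from string values.
    """
    keywords = {}
    with open(path) as fh:
        for card in fh:
            if card.startswith("END"):
                break
            if "=" not in card:
                continue
            key, _, rest = card.partition("=")
            value = rest.split("/")[0].strip().strip("'").strip()
            keywords[key.strip()] = value
    return keywords
```

With such a dictionary in hand, the consistency checks discussed in this chapter (group counts, observing mode, detector) become one-line lookups.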
Table 13.2: Types of Information in FOS Header Keywords

Keyword Type                 Information in Keywords                     Source
------------------------------------------------------------------------------
General Information
General data                 General structure information for the       All headers
                             data file in standard FITS style
Group Parameters: OSS        Acquisition data description, including     All headers
                             time of acquisition (modified Julian
                             date), maximum and minimum data values,
                             and axes information
Group Parameters: PODPS      Observation type and ground-based GIM       Calibrated
                             correction values from GIMP-CORR files      headers
Generic Conversion           Existence of science trailer line and       .d0h
Keywords                     reject array
FOS Descriptor               Description of FOS file and GIM             All headers
                             correction
COSTAR Keywords              Positions of the COSTAR FOS M1 mirror       .shh
Engineering Information
Time Conversion              Spacecraft and Universal time at start      .shh
                             of observation
CDBS Keywords in SHP         Engineering data regarding temperatures,    .shh
                             currents, and voltages at various points
                             in the instrument
CDBS Keywords in UDL         Data acquisition details, such as number    .ulh
                             of channels used and value of magnetic
                             field deflections used
Processing and Calibration Information
Statistics Keywords          Processing information                      .shh, .d0h, and
                                                                         calibrated data
                                                                         headers
Calibration Flags            Type of observation and configuration of    .d0h and
and Indicators               aperture, grating, and detector             calibrated data
                                                                         headers
Calibration Reference        Reference files and tables for calfos       .d0h and
Files & Tables               processing (either used or to be used)      calibrated data
                                                                         headers
Calibration Switches         Calibration steps for calfos processing     .d0h and
                             (either used or to be used)                 calibrated data
                                                                         headers
Pattern Keywords             Magnetic field deflection pattern used      .d0h and all
                             in acquiring the data                       calibrated data
                                                                         header files
Calibration Keywords         Observing time, user-supplied GIM offset    .d0h and
                             table name, LIVETIME, DEADTIME, position    calibrated data
                             angle of aperture, burst noise rejection    headers
                             limit
Aperture Position            Aperture position in RA and Dec             .d0h and
                                                                         calibrated data
                                                                         headers
Exposure Information         Exposure information and commanded FGS      .d0h and
                             lock                                        calibrated data
                                                                         headers
Observer-Supplied Observing Information from Phase II Proposal
Support Schedule:            Information on cover page of proposal       .shh
Program Info                 and type of output data requested by GO
Support Schedule:            Type of observation requested by GO, for    .shh
Flags and Indicators         example, the aperture, the detector, the
                             number of channels, etc.
Proposal Info                Observing strategy, e.g., instrument        .shh
                             configuration, target description, and
                             information on flux, exposure, moving
                             target, spatial scan, etc.
Target and Proposal ID       Target and PEP information                  .shh
Observing Information Produced in TRANS Stage
Support Schedule:            Telescope pointing and instrument           .shh
Data Group II                configuration on the sky, i.e., target
                             RA and Dec and offset objects, position
                             angle of diode array, OFFSET information,
                             spacecraft velocity, guide stars, etc.
Onboard Ephemeris Model      Spacecraft ephemeris                        .shh
------------------------------------------------------------------------------

Table 13.3: FOS Header Keywords

Keyword     Description and Comments
------------------------------------------------------------------------------
General Information from Header File-usually .d0h or .c1h
GCOUNT      Number of groups in data file
YTYPE       Nature of observation, important for paired aperture
            observations. (Not real values in .d0h file.) Values are OBJ,
            BKG, or SKY
YPOSn       Location of diode center in Y-base units of the nth group,
            useful for interpreting ACQ/BIN data. If there is only one
            group, then YPOS is the Y-base of that one group. Not populated
            with real values in the .d0h file
YBASE       YPOS of group #1
XBASE       XDAC units needed to center aperture on the diode array for
            group #1
BUNIT       Flux units of the data.
            Values: COUNTS, COUNTS/SEC, ERGS/SEC/CM^2/A, or ANGSTROM
FILLCNT     Number of sequences of filled data
ERRCNT      Number of sequences with bad data
INSTRUME    Instrument used for the observation. This will be FOS
ROOTNAME    Rootname of the observation set. Will start with the letter "y"
FILETYPE    Type of data in the file: SHP is science header packet, UDL is
            unique data log, SDQ is raw science data quality, WAV is
            wavelength, FLX is calibrated flux, ERR is calibrated flux
            error, MOD is calfos special mode processed data, SCI is
            object, sky, or background science data, OBJ is object data,
            BKG is background data, CDQ is calibration data quality, SKY is
            sky data, NET is sky-subtracted object data
GRNDMODE    Ground software mode of the FOS. Can be SPECTROSCOPY, TARGET
            ACQUISITION, IMAGE, RAPID-READOUT, SPECTRO-POLARIMETRY, or
            TIME-RESOLVED
DETECTOR    Detector in use for the observation. AMBER or BLUE
APER-ID     Aperture used for the observation: A-1 corresponds to the
            4.3", A-2 to the 0.5" pair (square), A-3 to the 0.25" pair
            (square), A-4 to the 0.1" pair (square), B-1 to the 0.5"
            (round), B-2 to the 0.3" (round), B-3 to the 1.0" (round), B-4
            is blank, C-1 to the 1.0" pair (square), C-2 to the 0.25"x2.0"
            slit, C-3 to the 0.7"x2.0" bar, and C-4 to the 2.0" bar
            apertures, respectively
POLAR-ID    Polarization waveplate used for the observation. A is
            waveplate A, B is waveplate B, and C is no polarizer used
            (clear)
FGWA-ID     Filter and grating used for the observation. Hxx means the
            Gxx0H grating, L15 means G160L, L65 means G650L, PRI means
            PRISM, and CAM means camera (mirror)
POLANG      Initial angular position of the polarizer in degrees
FCHNL       First diode used in the observation (the first diode in the
            array is designated as zero)
NCHNLS      Number of diodes used in the observation, useful for
            interpreting ACQ/BIN data and exposure time. Usually 512
            (except target acquisition and other specific modes, see below)
OVERSCAN    Number of overscans used in the observation, useful for
            interpreting ACQ/BIN data and exposure time. Usually 5
NXSTEPS     Number of X substeps used in the observation, useful for
            interpreting ACQ/BIN data and exposure time. Usually 4
MINWAVE     Minimum wavelength in angstroms. (Not populated in .c0h file)
MAXWAVE     Maximum wavelength in angstroms. (Not populated in .c0h file)
YFGIMPEN    Onboard GIM correction enabled. T or F
KYDEPLOY    COSTAR mirror deployment for the FOS. T or F
Exposure Time Information-usually in .d0h or .c1h
FPKTTIME    Time of the first data packet sent to the SDF, i.e., time at
            the end of the group exposure, in modified Julian date. Each
            group has its own unique FPKTTIME
LPKTTIME    Time of the last data packet sent to the SDF, in modified
            Julian date
DATE-OBS    FPKTTIME of group 1 converted to standard notation for date
TIME-OBS    FPKTTIME of group 1 converted to standard notation for time,
            truncated to an integer value; thus only accurate to 1/8 of a
            second
EXPSTART    Exposure start time in modified Julian date
EXPOSURE    Exact exposure time per pixel in seconds for each group. Note
            that this keyword is not populated with real values in the
            .d0h file
Pattern Keywords for Exposure Times-usually in .c0h or .c1h
LIVETIME    Time, in units of 7.8125 microseconds, during which the
            accumulator is open
DEADTIME    Time, in units of 7.8125 microseconds, during which the
            accumulator is closed
INTS        Number of repetitions of the live time/dead time cycle
YSTEPS      Number of Y substeps used in the observation. Usually 1
NPAT        Number of patterns used per readout
SLICES      Number of repeats of the magnetic field deflection sequence.
            Usually 1
NREAD       Number of readouts per memory clear. For ACCUM mode this is
            usually the number of groups. For RAPID mode this is 1
NMCLEARS    Number of memory clears per observation. 1 for ACCUM mode;
            equal to the number of groups for RAPID mode
Aperture Orientation Information-usually .shh and .d0h of acquisition image
OPMODE      Operation mode of the FOS for the observation. Can be ACQ,
            ACQ/BIN, ACQ/PEAK, ACQ/FIRMWARE, IMAGE, ACCUM, RAPID, or PERIOD
PA-APER     Position angle of the aperture in degrees
RA-APER1    RA of aperture center in degrees
DECAPER1    Dec of aperture center in degrees
------------------------------------------------------------------------------

The first thing you want to know about your observations is whether they executed as planned. If you are the principal investigator of a program, a look at the output of the RPS2 software from the preparation of your observations can be very helpful. A commonly available document for every HST program is the exposure log sheet, which you can retrieve from the PRESTO web page:

http://presto.stsci.edu/public/propinfo.html

Figure 13.1 shows one exposure log sheet as an example. It will be used in the following to outline a few checks you can run on your data in order to assess whether what you see is what you wanted to get. In our example, the FOS observations started with a four-stage ACQ/PEAK target acquisition at the beginning of the visibility period, followed by science observations using the FOS/RD detector with the G270H grating and the circular 1.0" aperture. The investigators used the occultation time to execute a side switch from FOS/RD to FOS/BL, which takes about 50 minutes. Then, in exposure number 5, they re-acquired the target with a single-stage peak-up in order to ensure good pointing accuracy and then continued their science observations, now using the G190H grating, for a total of 100 minutes. Looking at the data files, one finds that this long integration was split into three parts, filling the remainder of the re-acquisition orbit and the subsequent two orbits. We will now first show how to find out whether the target acquisition succeeded and then point out some basic first checks of the science data.
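Several of the checks below involve the time-related keywords in Table 13.3 (FPKTTIME, LPKTTIME, EXPSTART), which are given as modified Julian dates; MJD 0.0 corresponds to 1858 November 17.0 UT. A quick sketch of the conversion to calendar form (plain Python, not a pipeline routine; leap seconds are ignored):

```python
from datetime import datetime, timedelta

MJD_EPOCH = datetime(1858, 11, 17)  # MJD 0.0 by definition

def mjd_to_utc(mjd):
    """Convert a modified Julian date to a calendar date/time string."""
    return (MJD_EPOCH + timedelta(days=mjd)).strftime("%Y-%m-%d %H:%M:%S")
```

Differencing two such keywords (e.g., LPKTTIME minus FPKTTIME, in days) also gives a quick handle on the elapsed time spanned by a multi-group observation.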
In order to check whether a target acquisition was successful, you need to follow the step pattern of the acquisition and see whether the telescope zeroed in on the right object. We describe here the two most commonly used FOS target acquisition methods, ACQ/BIN and ACQ/PEAK.

Figure 13.1: Sample Exposure Log Sheet

A binary target acquisition is performed with the 4.3" aperture in several steps. Figure 13.2 shows a sketch illustrating this.

Figure 13.2: Binary Target Acquisition

First, the three thirds of the 4.3" aperture (actual size now 3.7" x 3.7"), which is three times as high as the diode array, are successively imaged onto the diode array in a search for the brightest third. The order in which the search is performed is center, lower, then upper third. The height of the diode array (1.29") corresponds to 256 y-base units. The first three groups in the multi-group ACQ/BIN spectrum will thus have YPOS entries of YPOS[1], YPOS[2] = YPOS[1] - 256, and YPOS[3] = YPOS[1] + 256. If the search algorithm finds an object in one of the three integrations, it will go back to that third of the aperture and then, by offsetting, try to place the object exactly on one edge of the diode array. In a successful approach it will, in successively smaller y-steps, try to place the object exactly on the edge by trying to reach exactly half the count rate measured earlier, when the full flux of the object was projected onto the diode array. Since the gradient of the HST point-spread function is very steep, this produces a good measure of the y-position of the target. The y-position of the target is (YPOS[last group] - 128) - YPOS[1]. ACQ/BIN spectra have 64 pixels (12 diodes are read out). The central pixel is number 32. Simply plotting the spectra shows the pixel location of the maximum, and from this you can calculate the x-offset from the center. In the x-direction, one diode (4 pixels) corresponds to 0.31".
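The offset arithmetic just described can be sketched in a few lines of Python (a sketch only: the function and variable names are illustrative, and the STScI cookbook remains the authoritative recipe):

```python
ARCSEC_PER_YBASE = 1.29 / 256  # 256 y-base units span the 1.29" array height
ARCSEC_PER_PIXEL = 0.31 / 4    # one diode = 4 pixels = 0.31" in x
CENTER_PIXEL = 32              # center of the 64-pixel ACQ/BIN spectrum

def acqbin_offsets(ypos_first, ypos_last, peak_pixel):
    """Estimated target x- and y-offsets in arcsec from an ACQ/BIN dataset,
    per the formulas in the text."""
    y_units = (ypos_last - 128) - ypos_first       # y-position in y-base units
    y_arcsec = y_units * ARCSEC_PER_YBASE
    x_arcsec = (peak_pixel - CENTER_PIXEL) * ARCSEC_PER_PIXEL
    return x_arcsec, y_arcsec
```

For instance, a spectrum peaking at pixel 36 implies an x-offset of one diode width, 0.31".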
From the x- and y-offsets you can calculate the length of the required slew from the original pointing to the estimated target position after acquisition. The statistical 1-sigma uncertainty for ACQ/BIN is of order 0.1". If your ACQ/BIN spectrum has 11 groups, the acquisition has failed. An example of the calculation of y- and x-offsets sketched above is available as a cookbook.

Peak-up Acquisition-ACQ/PEAK

As for ACQ/BIN, the first stage of a peak-up (or peak-down) target acquisition is normally done in three steps with the 4.3" aperture. The original target coordinates for the first stage are required to be accurate to about 1" so that the object will fall within the 4.3" aperture. After the first stage, which is a 1 x 3 step pattern in the y-direction, just as in ACQ/BIN, the subsequent stages of a peak-up sequence map the third of the aperture in which the target was found (i.e., where the highest count rate was measured). The second stage, a 2 x 6 step pattern using the 1.0" aperture, traces the location of the source within the 1.3" x 3.7" area where it was found in the first stage. This narrows the area in which the target is located to the surface area of the 1.0" aperture (one out of 12 spectra will have the most counts). The third stage of a peak-up sequence is normally a 3 x 3 point scan of the surface of the 1.0" aperture, now using the 0.5" aperture with a step size of about 0.3". This will lead to a pointing accuracy of about 0.2". If higher accuracy is needed, the surface area of the 0.5" aperture has to be scanned with the 0.3" aperture, possibly in two steps with decreasing step sizes. In order to convert FOS x,y coordinates into sky coordinates, you need to know the position angle of the aperture, which is documented in the header keyword PA-APER (see Chapter 5 of this book). Much information on target acquisition is given in the FOS Instrument Handbook.
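The conversion from FOS x,y offsets to sky coordinates is a rotation by PA-APER. The sketch below assumes the angle is measured from north through east and that the rotation sense shown is correct; verify the actual PA-APER convention against Chapter 5 before trusting the signs:

```python
import math

def fos_to_sky(dx, dy, pa_aper_deg):
    """Rotate aperture-frame offsets (arcsec) into (dRA, dDec) offsets.

    A sketch under assumed sign conventions, not taken from calfos.
    """
    pa = math.radians(pa_aper_deg)
    d_ra = dx * math.cos(pa) - dy * math.sin(pa)
    d_dec = dx * math.sin(pa) + dy * math.cos(pa)
    return d_ra, d_dec
```

With PA-APER = 0 the aperture frame coincides with the sky frame and the offsets pass through unchanged.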
Additional details, such as the interpretation of how the telescope moves in a sky coordinate system when centering on the position of the highest observed count rates in the different acquisition stages, are available as a cookbook.

Science Observations

After checking the target acquisition, you will want to know whether your integration time requirements were actually fulfilled. The exposure log sheet (exposure number 4 in Figure 13.1) gives only a first estimate. The actual on-source integration times are stored in the headers of your science data (e.g., the .c1h and .c0h files), in the header keyword EXPTIME. If this is much shorter than you planned, check whether the observation was interrupted by an earth occultation; in this case, the integration will have been resumed after re-acquisition of the guide stars following the occultation (Figure 13.1, exposure 6). In the example shown in Figure 13.1, the actual integration time with the G270H grating was 1190 seconds, filling the orbit up to the next earth occultation, plus 790 seconds in the next orbit. In standard FOS ACCUM mode spectra, the last group contains the accumulated total integration on the target. Using the task fwplot you can easily check whether the spectrum shows a flux roughly like what you expected. If your spectrum was split into two or more parts, for example because it was observed in more than one satellite orbit due to earth occultations, you will obtain the sum of all integrations by adding up the last groups of all spectra of your target taken with the same instrument configuration (combination of detector, aperture, and disperser). In order to judge the quality of the spectra, including the quality of the calibration, the existence of artifacts, or the validity of specific features, you need detailed information on the individual steps of the data calibration and on the instrument.
Therefore, before we delve into more data analysis techniques, we describe in the next sections the data calibration, the different checks you should perform, the accuracies you can expect for the FOS, and when and how to recalibrate your data, if necessary.

Engineering Data

As of February 1995, the engineering data and jitter files, which are provided by the Observatory Monitoring System (OMS), are also shipped to observers. These can give valuable information, such as telescope performance during the observation and the position of the target within the aperture. See Chapter 5 for a description of these files.

------------------------------------------------------------------------------

CHAPTER 14
Calibrating FOS Data

In This Chapter...
Pipeline Calibration Overview
Input Files
Reference Files
Reference Tables
Details of the FOS Pipeline Process
Post-Calibration Output Files
Polarimetric Calibration

This chapter describes how data are processed in the standard pipeline calibration, through which all HST data pass. Included here are details about each step in the calibration process, descriptions of the reference files and tables required by the calfos program, descriptions of the calibrated output files, and some information about polarimetric calibration.

Pipeline Calibration Overview

As mentioned earlier, data shipped to investigators or retrieved from the HST Data Archive are calibrated. The primary task for the calibration of FOS science spectra in the Routine Science Data Processing (RSDP) system (the pipeline) is calfos. The description of pipeline data processing presented in this chapter follows the way calfos works. We describe each step, the files used in that step, the files created, potential problems that might occur, and how these problems can be identified and corrected. Figure 14.1 and Table 14.1 serve as an overview for the following descriptions.
Figure 14.1: Pipeline Processing by calfos

Table 14.1: Calibration Steps and Reference Files for FOS Pipeline Processing

Switch      Processing Step                                     Reference File
------------------------------------------------------------------------------
ERR-CORR    Compute propagated error at each point in the
            spectrum. The error file is calibrated with the
            science file, and propagated statistical errors
            are written to .c2h.
CNT-CORR    Convert from raw counts to count rates by           ddthfile
            dividing each data point by the exposure time and
            correcting for disabled diodes. Diode numbers are
            taken from ddthfile or from the unique data log.
OFF-CORR    Correct for image motion in the FOS X direction     ccs7
            (dispersion) induced by the earth's magnetic
            field, using a model of the field along with
            scale factors from table ccs7. This step should
            be applied for observations taken before April 4,
            1993, after which the on-board GIM correction is
            used.
PPC-CORR    Correct raw count rates for saturation in the       ccg2
            detector electronics using the paired-pulse
            correction table (ccg2).
BAC-CORR    Correct for particle-induced background using the   bachfile
            default reference background (bachfile) if no
            background spectrum was obtained as part of the
            observation.
GMF-CORR    If BAC-CORR is set to PERFORM and the default       ccs8
            background file (bachfile) is used, this file can
            be scaled to the expected mean count rate for the
            spacecraft geomagnetic position using the ccs8
            reference table and subtracted from the count
            rate spectra by setting GMF-CORR to PERFORM. The
            scaled background is written to the .c7h file.
SCT_CORR    Remove background scattered light. The scattered    ccs9
            light is determined by calculating the mean value
            of diodes not illuminated by the selected
            grating; this mean is then subtracted from the
            observed spectrum. Un-illuminated diodes are
            found in the ccs9 reference table.
FLT-CORR    Correct for diode-to-diode sensitivity variations   fl1hfile
            by multiplying by the flatfield response file
            (fl1hfile).
            For paired aperture or spectropolarimetry           fl2hfile
            observations, a second flatfield file (fl2hfile)
            is used.
SKY-CORR    If a sky spectrum was observed, the background is   ccs0, ccs2,
            subtracted and the sky smoothed using median and    ccs3, ccs5
            mean filters. Uses the filter widths table
            (ccs3), aperture size table (ccs0), emission line
            positions (ccs2), and sky shift table (ccs5).
WAV-CORR    Compute the vacuum wavelength scale for each        ccs6
            object or sky spectrum using dispersion
            coefficients (ccs6).
APR_CORR    Correct for relative aperture throughputs. Object   ccsa, ccsb,
            data are normalized to the reference aperture      ccsc
            used to derive the average inverse sensitivity
            used in AIS_CORR; this step is required if
            AIS_CORR is used. Object data are divided to
            correct for changes in aperture throughput due to
            changes in OTA focus.
FLX_CORR    Convert from count rate to absolute flux units by   iv1hfile,
            multiplying by the inverse sensitivity curve.      iv2hfile
            Uses the inverse sensitivity file (iv1hfile) or,
            for paired aperture or spectropolarimetry, file
            (iv2hfile).
AIS_CORR    Convert from count rate to absolute flux units by   aishfile
            multiplying by inverse sensitivity curves. This
            step replaces FLX_CORR and differs in that an
            average inverse sensitivity, determined from
            calibration of all apertures, is used. APR_CORR
            must be performed for this step to have meaning.
TIM_CORR    Correct for changes in instrument sensitivity       ccsd
            over time by dividing the object data by an
            appropriate correction factor.
MOD-CORR    Perform ground software mode dependent              ccs4,
            corrections for time-resolved, rapid readout, or    rethfile
            spectropolarimetry observations. For RAPID mode,
            write the total flux and the sum of statistical
            errors to groups 1 and 2 of the .c3h file. For
            PERIOD mode, write pixel-by-pixel averages of all
            slices to groups 1 and 2 of the .c3h file. For
            spectropolarimetry, data from the individual
            waveplate positions are used to make Stokes
            parameters I, Q, U, and V and linear and circular
            polarization position angle spectra.
------------------------------------------------------------------------------

For the different corrections described above, calfos uses three different types of input files:

* Input data files: the observation data files, in Generic Edited Information Set (GEIS) format, i.e., multi-group images.
* Reference files (GEIS format images).
* Reference tables (STSDAS tables).

Input Files

Table 14.2 lists the science files that are used as input to calfos. These files are described briefly below.

Table 14.2: Observation Input Files for calfos

File Extension     File Contents
------------------------------------------------------------------------------
.shh and .shd      Standard header packet
.ulh and .uld      Unique data log
.d0h and .d0d      Science data
.q0h and .q0d      Science data quality
.x0h and .x0d      Science header line
.xqh and .xqd      Science header line data quality
.d1h and .d1d      Science trailer line
.q1h and .q1d      Science trailer line data quality
------------------------------------------------------------------------------

Standard Header Packet

The standard header packet (SHP) contains the telemetry values for engineering data and some FOS-unique data. The engineering data include temperatures, currents, and voltages at various points in the instrument. The FOS-unique data vary depending on the onboard processing used for a given observation. The header packet also contains information used in the operation of the spacecraft, such as the target name, the position and velocity of the telescope, the right ascension and declination of the target, the sun, and the moon, and other proposal information used in the observation that was provided in phase II of the proposal process. The SHP files are identified by the extensions .shh and .shd.

Unique Data Log

The unique data log (UDL) contains the mechanism control blocks used to control the entrance aperture, entrance port, polarizer, and filter grating wheel assembly.
This file also contains the discriminator level, disabled diode table, serial engineering data, instrument configuration, and exposure parameters. The UDL files are identified by the extensions .ulh and .uld.

Science Data Files

Science data files contain single-precision floating point values that represent the number of detected counts accumulated in each pixel. The number of data elements in the one-dimensional science data array depends on the observation mode: specifically, on the number of diodes, the number of substeps, the number of Y steps, and the number of repeats (sometimes called slices or bins) used in the observation. The maximum number of data elements is 12288. The associated header file also provides information on the different steps to be performed during pipeline calibration processing and the reference files and tables to be used in the calibration. The uncalibrated science data files are identified by the extensions .d0h and .d0d.

Science Header Line

The science header line (SHL) file is a one-dimensional array with a length equal to a line of the science data. It contains a partial copy of the unique data log. The SHL files are identified by the extensions .x0h and .x0d.

Science Trailer Line

The science trailer line (STL) file is also a one-dimensional array, containing the number of measurements rejected from the various combinations of X substeps, Y steps, repeats, etc. The rejection threshold is given in the unique data log header file under the keyword YNOISELM. The information in these files is used to compute the total effective exposure time per pixel, which is later used to convert the counts into count rates. The STL files are identified by the extensions .d1h and .d1d.

Data Quality Files

The science data files, science header line files, and science trailer files have corresponding data quality files that contain the flags for bad or suspect data.
These raw data quality files have quality flags as follows:

* Good data have the data quality flag = 1.
* Raw data drop-outs and filled raw data have the data quality flag = 16.
* Data failing a Reed-Solomon error check have the data quality flag = 100.

The data quality files are identified by the extensions .q0h, .q0d, .xqh, .xqd, .q1h, and .q1d, corresponding to the science data, science header, and science trailer files.

Reference Files

The reference files and tables are typically referred to by the name of the Calibration Data Base System (CDBS) reference relation that holds their names. The extensions of the reference tables and files are of the form .cyn, .rnh, and .rnd, where n represents a value from 0 to 9 and A to D (see Table 14.3). These files are maintained in the CDBS by STScI. STEIS maintains an updated catalog of these tables and files. In Chapter 4 we describe how to obtain these files. Except for some spectropolarimetric reference files, which are twice this length (for two pass directions), all reference files contain a vector of length:

(N_chan + N_over - 1) x N_x

where:

* N_chan is the number of channels observed (keyword NCHNLS).
* N_over is the number of channels multiplexed (keyword OVERSCAN).
* N_x is the number of substeps (keyword NXSTEPS).

Although reference files can be generated for any combination of NXSTEPS, FCHNL (first channel), NCHNLS, and OVERSCAN, the routine calibration reference files have a length of 2064 pixels, corresponding to the standard keyword values:

* NXSTEPS = 4
* FCHNL = 0
* NCHNLS = 512
* OVERSCAN = 5

For other values of the above keywords, calfos interpolates from the standard reference files. In some cases (non-standard NXSTEPS and OVERSCAN), the inverse sensitivity files and flatfields must be resampled before calfos will run correctly.

Reference Tables

The CDBS relations for the FOS reference files and reference tables are described below.
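Before turning to the individual relations, note that the reference-file length formula above reproduces the standard 2064-pixel length for the standard keyword values (a quick check in Python):

```python
def ref_vector_length(n_chan=512, n_over=5, n_x=4):
    """Length of a FOS calibration reference vector:
    (N_chan + N_over - 1) x N_x, per the formula in the text.
    Defaults are the standard NCHNLS, OVERSCAN, and NXSTEPS values."""
    return (n_chan + n_over - 1) * n_x

# standard values (NCHNLS=512, OVERSCAN=5, NXSTEPS=4) give 2064 pixels
```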
Note that these are relations that point to the reference files and tables: they do not contain the data themselves.

* cyccs0r: This table is used to determine the aperture area for paired apertures. If STEP-PATT=STAR-SKY is used, it is used only for sky subtraction.
* cyccs1r: This table is used to determine which aperture (UPPER or LOWER) of a paired aperture was used for observing an object or sky spectrum.

Table 14.3: Reference Tables and Files Required by calfos

Header     Data Base    Filename      File Contents
Keyword    Relation     Extension
------------------------------------------------------------------------------
CCS0       cyccs0r      .cy0          Aperture areas
CCS1       cyccs1r      .cy1          Aperture positions
CCS2       cyccs2r      .cy2          Sky emission line positions
CCS3       cyccs3r      .cy3          Sky and background filter widths
CCS4       cyccs4r      .cy4          Polarimetry parameters
CCS5       cyccs5r      .cy5          Sky shift parameters
CCS6       cyccs6r      .cy6          Wavelength dispersion coefficients
CCS7       cyccs7r      .cy7          GIM correction scale factors
CCS8       cyccs8r      .cy8          Predicted background (count rate)
CCS9       cyccs9r      .cy9          Un-illuminated diodes for scattered
                                      light correction
CCSA       cyccsar      .cya          OTA focus positions for aperture
                                      throughputs
CCSB       cyccsbr      .cyb          Aperture throughput coefficients
CCSC       cyccscr      .cyc          Throughput corrections versus focus
CCSD       cyccsdr      .cyd          Instrument sensitivity throughput
                                      correction factors
CCG2       coccg2r      .cmg          Paired-pulse coefficients
BACHFILE   cybacr       .r0h & .r0d   Default background file (count rate)
FLnHFILE   cyfltr       .r1h & .r1d   Flatfield file
IVnHFILE   cyivsr       .r2h & .r2d   Inverse sensitivity file
                                      (ergs cm^-2 A^-1 count^-1 diode)^a
RETHFILE   cyretr       .r3h & .r3d   Retardation file for polarimetry data
DDTHFILE   cyddtr       .r4h & .r4d   Disabled diode file
DQnHFILE   cyqinr       .r5h & .r5d   Data quality initialization file
AISHFILE   cyaisr       .r8h & .r8d   Average inverse sensitivity file
------------------------------------------------------------------------------
a.
Note that all references to inverse sensitivity, IVS, in the version 6 FOS Instrument Handbook contain the per-diode component of this definition implicitly. The meaning of IVS is identical in this document and the FOS Instrument Handbook.

* cyccs2r: Regions of the sky spectrum known to have emission lines. These regions are not smoothed before the sky is subtracted from the object spectrum. The cyccs2r table values have not been confirmed after science verification (SV). This does not affect any data reduction step, since there have been no GO sky observations.
* cyccs3r: Filter widths used for smoothing the sky or background spectra.
* cyccs4r: Polarimetry information regarding waveplate pass direction angles, initial waveplate position angles, the pixel number at which the wavelength shift between the two pass directions is to be determined for computing the merged spectrum, and the phase and amplitude coefficients for correction of polarization angle ripple.
* cyccs5r: The shift in pixels to be applied to the sky spectrum before subtraction.
* cyccs6r: Dispersion coefficients used to generate wavelength scales. There is one entry for each detector, disperser, aperture, and polarizer combination.
* cyccs7r: GIM correction scale factors used to scale the modeled shift of the spectrum due to the earth's magnetic field.
* cyccs8r: Predicted background count rates as a function of geomagnetic position, used to scale the background reference file.
* cyccs9r: Un-illuminated diode ranges for each detector and grating combination. Used to determine the background scattered light.
* cyccsar: List of OTA focus positions versus time. Used to correct for aperture sensitivity dependent on focus position.
* cyccsbr: Coefficients to normalize aperture throughputs to the reference aperture used to determine the average inverse sensitivity calibration.
* cyccscr: Throughput corrections versus focus.
* cyccsdr: Throughput correction factors to account for changes in instrument sensitivity over time.
* coccg2r: Paired-pulse correction table used to correct for the non-linear response of the diode electronics. Both detectors have the same correction constants, which are time independent.
* cybacr: This relation is for the background reference files. For each detector there is one file that is used as a default background count rate in the event that no background spectra were observed.
* cyfltr: This relation is for the flatfield reference files. These files are used to remove the small-scale diode and photocathode non-uniformities. There is one file for each detector, disperser, aperture, and polarizer combination.
* cyivsr: This relation is for the inverse sensitivity reference files. These files are used to convert corrected count rates to absolute flux units. There is one file for each detector, disperser, aperture, and polarizer combination. The inverse sensitivity file best suited to a given observation can be found using the StarView calibration screens (see "Tutorial: Retrieving Calibration Reference Files" on page 103) or by checking the information on inverse sensitivity files on STEIS, which is updated with each delivery of new files. Figures 14.2 and 14.3 show the pre-COSTAR inverse sensitivity for the most commonly used gratings for both detectors. The new post-COSTAR sensitivity curves are plotted in Figures 14.4 and 14.5.

Figure 14.2: Pre-COSTAR Inverse Sensitivity Reference Files for Blue High Dispersion Gratings
Figure 14.3: Pre-COSTAR Inverse Sensitivity Reference Files for Red High Dispersion Gratings
Figure 14.4: Cycle 4 Post-COSTAR Sensitivity Curves for High Dispersion Gratings
Figure 14.5: Cycle 4 Post-COSTAR Sensitivity Curves for Low Dispersion Gratings

* cyretr: This relation is for the retardation reference files used for spectropolarimetric data. The files are used to create the observation matrix f(w).
There is one file for each detector, disperser, and polarizer combination. The three available retardation files for the blue detector and waveplate B are plotted in Figure 14.6 with the appropriate grating shown.

Figure 14.6: Retardation Reference Files

* cyddtr: This is the relation for the disabled diode files. The table is used only if the keyword DEFDDTBL = F in the .d0h file. The disabled diode information is also contained in the .ulh file. The disabled diode table is updated periodically; information on updates is found on STEIS. As of June 1995, the total number of disabled blue diodes is 26 and of disabled red diodes is 15. Note that the diodes in Tables 14.4 and 14.5 are numbered such that the first diode in the diode array is 0 and the last diode is 511. For use in IRAF and STSDAS, add 1 to the diode number in the table.
* cyaisr: This is the relation for the average inverse sensitivity reference files. These files are used to convert corrected count rates to absolute flux units. There is one file for each detector and disperser combination.

Table 14.4: Blue Detector Disabled Diodes as of August 1995

DISABLED        DISABLED         DISABLED      ENABLED but
Dead Channels   Noisy Channels   Cross-Wired   Possibly Noisy Channels
------------------------------------------------------------------------------
 49              31               47             8
101              73               55            138
223             144                             139
284             201                             209/210
292             218                             381
409             225                             421
441             235                             426
471             241
                268
                398
                415
                427
                451
                465
                472
                497
------------------------------------------------------------------------------
Total: 8        16               2              7

* cypsf: This is the relation for the monochromatic pre-COSTAR point spread functions for the FOS, covering the wavelength range 1200-5400 A for the blue side and 1600-8400 A for the red side. These PSFs were modeled using the TIM software. In Figure 14.7, a sample blue side FOS PSF at 1400 A is shown.
Table 14.5: Red Detector Disabled Diodes as of August 1995

DISABLED        DISABLED         ENABLED but
Dead Channels   Noisy Channels   Possibly Noisy
------------------------------------------------------------------------------
  2             110               97
  6             189              114/115
 29             285              116
197             380              142
212             381              153
308             405              174
486             409              225
                412              258/259
                                 261
                                 285
                                 289
                                 410
------------------------------------------------------------------------------
Total: 7        8                14

Figure 14.7: Example of a Pre-COSTAR Point Spread Function for the FOS

* cylsf: This is the relation for the monochromatic pre-COSTAR line spread functions for all of the non-occulting FOS apertures, computed using the PSFs in cypsf. Figure 14.8 shows a sample monochromatic FOS LSF for the blue side 4.3 aperture. LSFs are available at each PSF wavelength.

Figure 14.8: Example of a Pre-COSTAR Line Spread Function for the FOS

* cyqinr: This is the relation for the data quality initialization files. These files are used to flag intermittent or noisy diodes, but they have not been kept up to date.

Details of the FOS Pipeline Process

This section describes in detail the pipeline calibration (calfos) procedures. Each step of the processing is selected by the values of keyword switches in the science data header file. All FOS observations undergo pipeline processing to some extent. Target acquisition and IMAGE mode data are processed only up to step 6 (paired-pulse correction) but are not GIM corrected. ACCUM data are processed up to step 14 (absolute flux calibration), and RAPID, PERIOD, and POLARIMETRY data are processed up to step 15 (special mode processing). The steps in the FOS calibration process are:

1. Read the raw data.
2. Calculate statistical errors (ERR-CORR).
3. Initialize data quality.
4. Convert to count rates (CNT-CORR).
5. Perform GIM correction (OFF-CORR).
6. Do paired-pulse correction (PPC-CORR).
7. Subtract background (BAC_CORR).
8. Subtract scattered light (SCT_CORR).
9. Do flatfield correction (FLT_CORR).
10. Subtract sky (SKY_CORR).
11.
Correct aperture throughput and focus effects (APR_CORR).
12. Compute wavelengths (WAV_CORR).
13. Correct time-dependent sensitivity variations (TIM_CORR).
14. Perform absolute calibration (FLX_CORR); superseded by AIS_CORR.
15. Do special mode processing (MOD_CORR).

These steps are described in detail in the following sections. A basic flowchart is provided in Figure 14.1 on page 222. Note that AIS_CORR overrides FLX_CORR if both are set to PERFORM.

Reading the Raw Data

The raw data, stored in the .d0h file, are the starting point of the pipeline data reduction and calibration process. The raw science data are read from the .d0h file and the initial data quality information is read from the .q0h file. If science trailer (.d1h) and trailer data quality (.q1h) files exist, these are also read at this time.

Calculating Statistical Errors (ERR-CORR)

The noise in the raw data is photon (Poisson) noise, and errors are estimated by simply calculating the square root of the raw counts per pixel. An error value of zero is assigned to filled data, i.e., pixels that have a data quality value of 800. For all observing modes except polarimetry, an error value of zero is also assigned to pixels that have zero raw counts. Polarimetry data that have zero raw counts are assigned an error value of one. From this point on, the error data are processed in lock-step with the spectral data, except that errors caused by sky and background subtraction, as well as those from flatfields and inverse sensitivity files, are ignored. At the end of the processing, the calibrated error data are written to the .c2h file.

Data Quality Initialization

The starting point of the data quality information is the data quality values from the spacecraft as recorded in the .q0h file. This step of the processing adds values from the data quality reference files to the initial values in the .q0h file. The routine uses the data quality initialization reference file DQ1HFILE listed in the .d0h file.
A second file, DQ2HFILE, is necessary for paired-aperture and spectropolarimetry observations. These reference files contain flags for intermittently noisy and dead channels (data quality values 170 and 160, respectively). The data quality values are carried along throughout the remaining processing steps, where subsequent routines add values corresponding to other problem conditions. Only the highest (most severe) data quality value is retained for each pixel. At the end of the processing the final data quality values are written to the .cqh file. The noisy and dead channels in the data quality files could be out of date; the dead diode tables have the most up-to-date list of dead and noisy diodes.

Conversion to Count Rates (CNT-CORR)

At this step, the raw counts per pixel are converted to count rates by dividing by the exposure time of each pixel. Filled data (data quality = 800) are set to zero. A correction for disabled diodes is also included at this point. If the keyword DEFDDTBL in the .d0h file is set to TRUE, the list of disabled diodes is read from the unique data log (.ulh) file. Otherwise the list is read from the disabled diode reference file, DDTHFILE, named in the .d0h file. The DDTHFILE is more commonly used for the disabled diode information.

The actual process by which the correction for dead diodes is accomplished is as follows. First, recall that because of the use of the OVERSCAN function, each pixel in the observed spectrum actually contains contributions from several neighboring diodes (see "Science Data Acquisition" on page 206 for more details). Therefore, if one or more of the diodes in the group that fed a given output pixel are dead or disabled, there will still be some amount of signal due to the contribution of the remaining live diodes in the group.
Therefore we can correct the observed signal in that pixel back to the level it would have had if all diodes were live; to do this, we divide by the relative fraction of live diodes. The corrected pixel value is zero if all the diodes that contribute to that pixel are dead or disabled; otherwise, the value is given by the equation:

    corr = obs * total / (total - dead)

Where:
* corr - is the corrected pixel value.
* obs - is the observed pixel value.
* total - is the total number (live + dead) of diodes.
* dead - is the number of dead or disabled diodes.

This correction (to the signal and its associated error) is applied at the same time the raw data are divided by exposure time. If the fraction of dead diodes for a given pixel exceeds 50 percent, a data quality value of 50 is assigned. If all of the diodes for a given pixel are dead, both the data and error values are set to zero and a data quality value of 400 is assigned. The count rate spectral data are written to the .c4h file at this point. Note that the S/N in a given pixel is appropriate to the actual observed count rate.

GIM Correction (OFF-CORR)

Data obtained prior to April 5, 1993, do not have an onboard geomagnetically induced image motion (GIM) correction applied, and therefore require a correction for GIM in the pipeline calibration. Note that some observations obtained after April 5, 1993, also lack the onboard GIM correction, because the application of the onboard correction depended on when the proposal was completely processed. The GIM correction is determined by scaling a model of the strength of the geomagnetic field at the location of the spacecraft. The model scale factors are read from the CCS7 reference table. The correction is applied to the spectral data, the error data, and the data quality values. A unique correction is determined for each data group based on the orbital position of the spacecraft at the mid-point of the observation time for each group.
While the correction is calculated to sub-pixel accuracy, it is applied as an integer value and is therefore accurate only to the nearest integral pixel. This is done to avoid resampling the data and thereby losing information. Furthermore, the pipeline correction is applied only in the x direction (i.e., along the diode array). The correction is applied by simply shifting pixel values from one array location to another. As a typical example, if the amount of the correction for a particular data group is calculated to be +2.38 pixels, the data point originally at pixel location 1 is shifted to pixel 3, pixel 2 is shifted to pixel 4, pixel 3 to pixel 5, and so on. Pixel locations at the ends of the array that are left vacant by this process (e.g., pixels 1 and 2 in the example above) are set to a value of zero and are assigned a data quality value of 700.

Special handling is required for data obtained in ACCUM mode, since each data frame contains the sum of all frames up to that point. In order to apply a unique correction to each frame, data taken in ACCUM mode are first unraveled into separate frames. Each frame is then corrected individually, and the corrected frames are recombined. Target acquisition data, image mode data, and polarimetry data are not GIM corrected during the pipeline processing.

The onboard GIM correction is applied on a finer grid, and both along the diode array and in the perpendicular direction: in units of 1/32 of the diode width along the array, and in units of 1/256 of the diode height perpendicular to it. The onboard GIM correction is calculated and applied every 30 seconds, and is applied to all observations except ACQ/PEAK observations.

Paired Pulse Correction (PPC_CORR)

This step corrects the data for saturation in the detector electronics. The dead time constants q0, q1, and F are read from the reference table CCG2.
Currently the values of these dead time constants in the CCG2 table are q0 = 9.62e-6 seconds, q1 = 1.826e-10 sec^2/count, and F = 52,000 counts per second. The following equation is used to estimate the true count rate:

    x = y / (1 - y*t)

Where:
* x - is the true count rate.
* y - is the observed count rate.
* t - is q0 for y less than or equal to F.
* t - is q0 + q1 * (y - F) for y greater than F.

The saturation limits defined in the CCG2 table are applied as follows:
* Observed count rates greater than the severe saturation limit of 57,000 counts per second are set to zero (and recorded in the calfos processing log) and assigned a data quality value of 300.
* All observed count rates between this severe saturation limit and 10 counts per second are corrected, but those lying between the predefined limits of large (55,000 counts/second) and severe saturation (57,000 counts/second) are assigned a data quality value of 190.
* Those that lie between the limits of moderate (52,000 counts/second) and large (55,000 counts/second) saturation are assigned a data quality value of 130, and the paired-pulse correction is applied.
* Count rates between the threshold value (10 counts/second) and 52,000 counts/second have the paired-pulse correction applied.
* Data with count rates below this threshold value (10 counts/second) do not have any paired-pulse correction.

Background Subtraction (BAC_CORR)

This step subtracts the background (i.e., the particle-induced dark current) from object and sky (if present) spectra. If no background spectrum was obtained with the observation, a default background reference file, BACHFILE, is used, scaled to a mean expected count rate based on the geomagnetic position of the spacecraft at the time of the observation. The scaling parameters are stored in the reference table CCS8. The scaled background reference spectrum is written to the .c7h file for later examination.
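As an illustration, the paired-pulse equation above can be sketched in Python. This is a hedged sketch, not calfos source code; the function name is ours, and the constants are the CCG2 values quoted in the text:

```python
# Dead time constants quoted above from the CCG2 reference table.
Q0 = 9.62e-6      # seconds
Q1 = 1.826e-10    # sec^2 / count
F = 52000.0       # counts per second (threshold between the two forms of t)

def paired_pulse(y):
    """Return the dead-time-corrected count rate for an observed rate y."""
    if y < 10.0:          # below threshold: no correction applied
        return y
    if y > 57000.0:       # severe saturation: datum is set to zero (dq 300)
        return 0.0
    t = Q0 if y <= F else Q0 + Q1 * (y - F)
    return y / (1.0 - y * t)
```

For example, an observed rate of 10,000 counts/second corrects to roughly 11,064 counts/second; the correction grows rapidly as y approaches saturation.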
If an observed background is used, it is first repaired; bad points (i.e., points at which the data are flagged as lost or garbled in the telemetry process) are filled by linearly interpolating between good neighbors. Next, the background is smoothed with a median filter, followed by a mean filter. The median and mean filter widths are stored in reference table CCS3. No smoothing is done to the background reference file, if used, since that file is already a smoothed approximation to the background. Spectral data at pixel locations corresponding to repaired background data are assigned a data quality value of 120. Finally, the repaired background data are subtracted. Although this step is called background subtraction, it is really a dark count subtraction.

Scattered Light Correction (SCT_CORR)

This step removes scattered light present in the object data for certain detector and grating modes. Some detector and grating combinations do not fully illuminate all the science diodes. For these combinations, the dark diodes can be used to measure the scattered light illuminating the diodes. For the valid combinations, the average count rate for these diodes is determined and subtracted from the whole data array, including the dark pixels. The calfos task reports (via the standard output) whether it performs this step, along with the subtracted value; this information is also available in the trailer file if you have a dataset from the pipeline.

Flatfield Correction (FLT_CORR)

This step removes the diode-to-diode sensitivity variations and fine structure (typically on size scales of ten diodes or less) from the object, error, and sky spectra by multiplying each by the inverse flatfield response as stored in the FL1HFILE reference file. A second flatfield file, FL2HFILE, is required for paired-aperture or spectropolarimetry observations. No new data quality values are assigned in this step.
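The scattered-light subtraction described above (SCT_CORR) amounts to subtracting the mean count rate of the un-illuminated region from the entire array. A minimal sketch, assuming 1-indexed pixel limits as in Table 14.6 (the function name is ours, not from calfos):

```python
import numpy as np

def subtract_scattered_light(spec, min_pixel, max_pixel):
    """Subtract the mean of the dark (un-illuminated) pixels from the
    whole array, including the dark pixels themselves.
    min_pixel and max_pixel are 1-indexed and inclusive."""
    level = spec[min_pixel - 1:max_pixel].mean()
    return spec - level, level

# Toy example using the blue G130H dark region (pixels 31-130).
spec = np.full(2064, 5.0)
corrected, level = subtract_scattered_light(spec, 31, 130)
```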
Sky Subtraction (SKY_CORR)

If the sky was observed, the flatfielded sky spectrum is repaired in the same fashion as described above for an observed background spectrum. The spectrum is then smoothed once with a median filter and twice with a mean filter, except in regions of known emission lines, which are masked out. The CCS2 reference table contains the pairs of starting and ending pixel positions for masking the sky emission lines. The sky spectrum is then scaled by the ratio of the object and sky aperture areas, and shifted in pixel space (to the nearest integer pixel) so that the wavelength scales of the object and sky spectra match. The sky spectrum is then subtracted from the object spectrum, and the resulting sky-subtracted object spectrum is written to the .c8h file. Pixel locations in the sky-subtracted object spectrum that correspond to repaired locations in the sky spectrum are assigned a data quality value of 120. This routine requires table CCS3 containing the filter widths, the aperture size table CCS0, the emission line position table CCS2, and the sky shift table CCS5.

This observation mode has never been used, as half the integration time must be spent on the sky. Since there have been no GO science observations of the sky, the CCS2 table values have not been confirmed. Note that, especially for extended objects, paired-aperture observations can be obtained in the so-called "OBJ-OBJ" mode, in which no sky subtraction is performed.

Computing the Wavelength Scale (WAV_CORR)

A vacuum wavelength scale is computed for each object or sky spectrum. Wavelengths are computed using dispersion coefficients corresponding to each grating and aperture combination, stored in reference table CCS6. The computed wavelength array is written to the .c0h file.
For the gratings, the wavelengths are computed as follows:

    lambda(A) = SUM[p=0 to 3] l(p) * x^p

For the prism, wavelengths are computed as:

    lambda(A) = SUM[p=0 to 4] l(p) / (x - x0)^p

Where:
* l(p) - are the dispersion coefficients in table CCS6.
* x - is the position (in diode units) in the object spectrum, where the first diode is indexed as 0.
* x0 - is a scalar parameter also found in table CCS6.

Note that the above equations determine the wavelength at each diode. This must be converted to pixels using NXSTEPS. For example, if NXSTEPS=4, the values for x are 0, 0.25, 0.5, 0.75, 1, etc., for pixels 1, 2, 3, 4, 5, etc. For multigroup data, as in either rapid-readout or spectropolarimetry mode, there are separate wavelength calculations for each group. These wavelengths may be identical or slightly offset, depending on the observation mode.

Aperture Throughput Correction (APR_CORR)

This calibration step consists of two parts: normalizing throughputs to a reference aperture and correcting throughputs for focus changes. Both parts are relevant only if the average inverse sensitivity files are used (see AIS_CORR). Each aperture affects the throughput of light onto the photocathode. To prepare the object data for absolute flux calibration, the object data must be normalized to the throughput that would be seen through a predetermined reference aperture. The normalization is calibrated as a second-order polynomial and is a function of wavelength. The polynomial is evaluated over the object's wavelength range and divided into the object data. The coefficients are found in the CCSB reference table. Once the object data have been normalized, a second correction compensates for variations in sensitivity due to focus changes. The CCSA table contains a list of dates and focus values. The sensitivity variation is modeled as a function of wavelength and focus, the coefficients of which are found in the CCSC table.
This model is evaluated and divided into the object data.

Absolute Flux Calibration

This step multiplies object (and error) spectra by the appropriate inverse sensitivity vector to convert from count rates per diode to absolute flux units (erg s^-1 cm^-2 A^-1). Two different methods of performing this calibration have been used. The pipeline has used the so-called FLX_CORR method from the time of HST launch through 1995. The pipeline processing method, for non-polarimetric observations, is expected to change to the so-called AIS_CORR method in 1996. AIS_CORR is also available, and is the preferred method, for re-calibrating any future non-polarimetric FOS observations. Spectropolarimetry will continue to be processed via the FLX_CORR method.

FLX_CORR: The inverse sensitivity data are read from the IV1HFILE reference file. A second inverse sensitivity file, IV2HFILE, is required for paired-aperture or spectropolarimetry observations.

AIS_CORR: This step is functionally no different from FLX_CORR except for the way in which the absolute flux calibration is derived. The calibration is based on data from all apertures, averaged, or normalized, to an arbitrary reference aperture, using the inverse sensitivity information from the AISHFILE reference file. The APR_CORR step must be performed in order for AIS_CORR to have any meaning.

For both methods, points where the inverse sensitivity is zero (i.e., not defined) are flagged with a data quality value of 200. The calibrated spectral data are written to the .c1h file, the calibrated error data to the .c2h file, and the final data quality values to the .cqh file.
This is the final step of processing for ACCUM mode observations.

Time Correction (TIM_CORR)

This step corrects the absolute flux for variations in the sensitivity of the instrument over time. The correction factor is a function of time and wavelength. The factor is calculated by linear interpolation for the observation's time and wavelength coverage, and is then divided into the object absolute flux. The coefficients are found in table CCSD.

Special Mode Processing (MOD_CORR)

Data acquired in the rapid-readout, time-resolved, or spectropolarimetry modes receive specialized processing in this step. All data resulting from this additional processing are stored in the .c3h file. See "Science Data Acquisition" on page 206 for details of how the output data are stored.

RAPID Mode: For the RAPID mode, the total flux, integrated over all pixels, is computed for each readout. The statistical errors for each frame are also propagated, summed in quadrature. The following equations are used in the computation:

    sum(F) = [ SUM(x=1 to NDAT) f(x,F) ] * NDAT/good
    errsum(F) = sqrt( [ SUM(x=1 to NDAT) ef^2(x,F) ] * NDAT/good )

Where:
* f(x,F) - is the flux in pixel x and readout F.
* ef(x,F) - is the associated error in the flux for pixel x and readout F.
* sum(F) - is the total flux for readout F.
* errsum(F) - is the associated error in the total flux for readout F.
* NDAT - is the total number of pixels in readout F.
* good - is the total number of good pixels, i.e., pixels with data quality less than 200.

The output .c3h file contains two data groups, where the number of pixels in each group is equal to the number of original data frames. Group 1 contains the total flux for each frame, where pixel 1 is the sum for frame 1, pixel 2 the sum for frame 2, etc. Group 2 of the .c3h file contains the corresponding propagated errors.
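The RAPID-mode equations above can be sketched as follows; this is a hedged illustration (array layout and names are our assumptions, not the calfos implementation):

```python
import numpy as np

def rapid_totals(flux, err, dq):
    """flux, err, dq: 2-D arrays of shape (nreadouts, NDAT).
    Returns the total flux and propagated error per readout,
    scaled by NDAT/good as in the equations above."""
    ndat = flux.shape[1]
    good = (dq < 200).sum(axis=1)                    # good pixels per readout
    total = flux.sum(axis=1) * ndat / good
    etotal = np.sqrt((err ** 2).sum(axis=1) * ndat / good)
    return total, etotal
```

For a readout of four unit-flux, unit-error pixels (all good), the total is 4 and the propagated error is 2.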
PERIOD Mode: For the PERIOD mode, the pixel-by-pixel average of all slices (NSLICES separate memory locations) and the differences from the average for each slice of the last frame are computed. The following equations are used in the computation:

    average(x) = [ SUM(L=1 to NSLICES) f(x,L) ] / good(x)
    errave(x) = sqrt( SUM(L=1 to NSLICES) ef^2(x,L) ) / good(x)
    diff(x,L) = f(x,L) - average(x)
    errdiff(x,L) = sqrt( ef^2(x,L) + errave^2(x) )

Where:
* NSLICES - is the number of slices.
* f(x,L) - is the flux in slice L at pixel x.
* ef(x,L) - is the error associated with the flux in slice L at pixel x.
* average(x) - is the average flux of all slices at pixel x.
* errave(x) - is the error associated with the average flux at pixel x.
* good(x) - is the total number of good values, i.e., data quality < 200, accumulated at pixel x.
* diff(x,L) - is the flux difference at pixel x between slice L and the average.
* errdiff(x,L) - is the error associated with the flux difference.

The first two data groups of the output .c3h file contain the average flux and the associated errors, respectively. Each subsequent pair of data groups contains the difference from the average and the corresponding total error for each slice.

POLARIMETRY Mode: For the POLARIMETRY mode, the data from individual waveplate positions are combined to calculate the Stokes I, Q, U, and V parameters, as well as the linear and circular polarizations and the polarization position angle spectra (for details of calculating the Stokes parameters see FOS Instrument Science Report 078). Four sets of Stokes parameter and polarization spectra are computed. The first two sets are for each of the separate pass directions, the third for the combined pass direction data, and the fourth for the combined data corrected for interference and instrumental orientation.
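The PERIOD-mode equations above translate into a short sketch (our naming; a hedged illustration rather than the pipeline code):

```python
import numpy as np

def period_combine(flux, err, dq):
    """flux, err, dq: 2-D arrays of shape (NSLICES, npix).
    Returns average, errave, diff, and errdiff per the equations above."""
    good = (dq < 200).sum(axis=0)                    # good values per pixel
    average = flux.sum(axis=0) / good
    errave = np.sqrt((err ** 2).sum(axis=0)) / good
    diff = flux - average                            # per-slice difference
    errdiff = np.sqrt(err ** 2 + errave ** 2)
    return average, errave, diff, errdiff
```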
Scattered Light Correction

Scattered light observed in FOS data is produced by the diffraction patterns of the FOS gratings, the entrance apertures, and the micro-roughness of the gratings (extensive work is discussed in FOS Instrument Science Report 114). A routine pipeline calibration correction is applied only for those gratings that have regions of zero sensitivity to dispersed light (Table 14.6). The values listed in the table apply to spectra with FCHNL=0, NCHNLS=512, NXSTEPS=4, and OVERSCAN=5, i.e., the default FOS observing mode. The correction applied in this way is only a wavelength-independent first-order approximation. For details of the correction please see FOS Instrument Science Report 103. Note that the scattered light correction is in addition to the background subtraction.

Table 14.6: Regions Used for Scattered Light Subtraction

Detector   Grating   Minimum        Maximum        Total
                     Pixel Number   Pixel Number   Pixels
------------------------------------------------------------------------------
Blue       G130H       31             130            100
Blue       G160L      901            1200            300
Blue       Prism     1861            2060            200
Red        G190H     2041            2060             20
Red        G780H       11             150            140
Red        G160L      601             900            300
Red        G650L     1101            1200            100
Red        Prism        1             900            900
------------------------------------------------------------------------------

Since the scattered light characteristics of the FOS are now well understood, a scattered light model is available at STScI. It will be made available shortly for use as a post-observation parametric analysis tool (bspec) in STSDAS to estimate the amount of scattered light affecting a given observation. The program was developed by M. Rosa (ESO, ST-ECF); see FOS Instrument Science Report 127. The amount of scattered light depends on the spectral energy distribution of the observed object across the whole detector wavelength range and on the sensitivity of the detector. For cool objects the number of scattered light photons can dominate the dispersed spectrum in the UV.
Thus, in order to model the scattered light in the FOS appropriately, the red part of the source spectrum has to be known.

Post-Calibration Output Files

Several types of calibrated output files are produced by calfos. These are listed in Table 14.7. More extensive descriptions of each type of file are provided below.

Table 14.7: Output Calibrated FOS Data Files

Filename Extension   File Contents
------------------------------------------------------------------------------
.c0h and .c0d        Calibrated wavelengths
.c1h and .c1d        Calibrated fluxes
.cqh and .cqd        Calibrated data quality
.c2h and .c2d        Calibrated statistical error
.c3h and .c3d        Special mode data
.c4h and .c4d        Count rate object and sky spectra
.c5h and .c5d        Flatfielded object count rate spectrum
.c6h and .c6d        Flatfielded sky count rate spectrum
.c7h and .c7d        Background count rate spectrum
.c8h and .c8d        Flatfielded and sky-subtracted object count rate spectrum
------------------------------------------------------------------------------

The calibrated output files listed in Table 14.7 include:

* Calibrated wavelength files: These files contain single-precision floating point calibrated vacuum wavelengths corresponding to the center of each pixel of the science data. The files are identified by the extensions .c0h and .c0d.
* Calibrated flux files: These files contain single-precision floating point calibrated fluxes corresponding to each pixel of the science data. The files are identified by the extensions .c1h and .c1d.
* Calibrated data quality files: The quality flags in these files mark the bad pixel values in the calibrated files. The quality flags from the raw data are updated, and additional flags are added for problems detected in the calibration process. The data quality flags are defined in Table 17.3 on page 288. The data quality files are identified by the extensions .cqh and .cqd.
* Calibrated statistical error files: These files contain the statistical errors of the original data values.
Further, these files are calibrated in lock-step with the science data files. Errors caused by sky and background subtraction, flatfields, and inverse sensitivity files are not calculated and updated. The error files are identified by the extensions .c2h and .c2d.
* Special mode data files: Data acquired in the rapid-readout, time-resolved, or spectropolarimetry modes require processing steps in addition to (or complementing) those used for standard ACCUM data. The calibrated data are then stored in special mode data files. For the RAPID mode, the files contain the total flux, integrated over all pixels, and the associated statistical error for each readout. For the TIME-RESOLVED mode, the files contain the pixel-by-pixel average of all slices or bins, the difference between each slice or bin and the average, and the average propagated statistical errors. For the POLARIMETRY mode, the file contains the Stokes I, Q, U, and V parameters, the linear and circular polarization, and the polarization position angle. The polarimetric quantities and the propagated errors are calculated for each of the separate pass directions, for the combined pass direction data, and for the combined pass direction data corrected for interference and instrumental orientation (see below). The special mode data files are identified by the extensions .c3h and .c3d.
* Intermediate calibrated output files: At most, six sets of intermediate calibrated output files are produced, depending on the observation mode. The files containing the count rate spectra are corrected for undersampling caused by disabled diodes, overscanning, and noise rejection. These files are identified by the extensions .c4h and .c4d. The flatfielded object spectrum files are identified by the extensions .c5h and .c5d. The flatfielded sky spectrum files are produced only if a sky observation was obtained. These files are identified by the extensions .c6h and .c6d. The background spectrum is identified by the extensions .c7h and .c7d.
If the sky is observed, then a smoothed-sky subtracted object spectrum prior to flux calibration is produced. The files containing the smoothed-sky subtracted object spectrum are identified by the extensions .c8h and .c8d.

If the reference files and reference tables used in the pipeline processing do not reflect the actual instrument performance, calibration errors can occur, leading to artificial features in the calibrated science data. In "FOS Error Sources" on page 251, we list some prominent examples of such errors.

Polarimetric Calibration

The group contents of the raw (.d0h) data file are shown in Table 14.8. Note that the number of pixels in each group is twice the number of pixels in a single spectrum, as there are two spectra appended together, one for each pass direction. Once again, the number of pixels in the spectrum depends on the values of NXSTEPS and OVERSCAN used (see ACCUM mode for details). The organization of calibrated polarimetry data files differs from that of the raw data files and of calibrated data taken in other observing modes in that the two pass direction spectra from each readout are stored in separate data groups instead of being concatenated within one group. The wavelength arrays for the different POLSCAN positions should be identical (rotating the waveplate does not change the wavelengths), but the wavelengths are offset by a constant amount between the two pass directions. The calibrated fluxes, the corresponding statistical errors, and the data quality are stored in 2 x POLSCAN number of groups, similar to the wavelengths. Note that for polarimetry data the statistical errors cannot be combined simply. The errors in the Stokes parameters are calculated separately by the data reduction pipeline.
The polarimetry-specific data are stored once again as groups in a separate file.

Table 14.8: Group Contents of Raw Polarimetry Science Data Files

Group #   Contents
------------------------------------------------------------------------------
1         Polscan 1: pass direction 1 and pass direction 2
2         Polscan 2: pass direction 1 and pass direction 2
3         Polscan 3: pass direction 1 and pass direction 2
.
.
.
15        Polscan 15: pass direction 1 and pass direction 2
16        Polscan 16: pass direction 1 and pass direction 2
------------------------------------------------------------------------------

The .c0h file is a dataset with 2 x POLSCAN groups with wavelengths for both pass directions through the Wollaston prism and each POLSCAN position. Note that the wavelengths for the different POLSCAN positions should be identical, as mentioned earlier, but the wavelengths are offset between the two pass directions by a constant amount.

The .c1h file is a dataset with 2 x POLSCAN groups containing calibrated flux for both pass directions and each POLSCAN position. The calibrated fluxes are stored in exactly the same way as the wavelengths. Note that unlike calibrated flux data for non-polarimetric observations, the first group will not represent the absolute flux for the source, but only half, since the light was split into two spectra by the polarizer. Representative fluxes are formed by averaging the fluxes from the complete set of POLSCAN positions for each pass direction separately, and then summing the two. Since there is a wavelength shift between the spectra from the two pass directions, to combine the two mean spectra one spectrum must be shifted in wavelength to match the other. Pass direction 2 is shifted onto pass direction 1. In the summed spectrum, any pixel that has contributions from only one pass direction is set to zero.
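The combination recipe just described can be sketched as follows (a minimal illustration with NumPy on synthetic data; the helper names and the integral-pixel shift are assumptions of this sketch, not calfos routines):

```python
import numpy as np

def polscan_group(polscan, pass_dir):
    """1-based group index in the calibrated .c0h/.c1h/.c2h/.cqh files:
    pass directions 1 and 2 alternate for each POLSCAN position."""
    return 2 * (polscan - 1) + pass_dir

def combine_pass_directions(flux_groups, shift):
    """Average the fluxes over all POLSCAN positions for each pass
    direction, shift pass direction 2 onto pass direction 1 by an
    integral pixel 'shift' (an assumption of this sketch), sum the two,
    and zero any pixel covered by only one pass direction.

    flux_groups: (2 * n_polscan, n_pix) array; rows alternate pass
    direction 1 and pass direction 2, as in the calibrated files.
    """
    pass1 = flux_groups[0::2].mean(axis=0)
    pass2 = flux_groups[1::2].mean(axis=0)
    shifted = np.full_like(pass2, np.nan)   # NaN marks "no contribution"
    if shift >= 0:
        shifted[shift:] = pass2[:pass2.size - shift]
    else:
        shifted[:shift] = pass2[-shift:]
    total = pass1 + shifted
    # Pixels with a contribution from only one pass direction -> zero.
    return np.where(np.isnan(total), 0.0, total)
```

As the text cautions, the statistical errors must not be combined this way; use the Stokes I error computed by the polarimetry processing instead.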
The total flux (Stokes I) is computed by the special mode processing phase of calfos and is stored in the .c3h dataset (see below), and so is more conveniently obtained from there.

The .c2h file is a dataset with 2 x POLSCAN groups with the statistical error of the calibrated flux for both pass directions and each POLSCAN position. The flux errors are stored in exactly the same way as the wavelengths and fluxes. As for the calibrated flux dataset, this dataset differs from the statistical errors for non-polarimetric data, and the errors cannot be simply combined. We suggest that the error on the Stokes I parameter computed by the polarimetry processing be used as the total flux error.

The .cqh file is a dataset with 2 x POLSCAN groups with the data quality values for the calibrated fluxes. The organization is exactly the same as that of the calibrated fluxes dataset.

The group organization of the .c0h, .c1h, .c2h, and .cqh files is shown in Table 14.9.

Table 14.9: Group Organization of the Calibrated .c0h, .c1h, .c2h, and .cqh Files

Group   Contents Depending on Calibration File
------------------------------------------------------------------------------
1       Polscan 1, Pass direction 1: wavelength, flux, error, or data quality
2       Polscan 1, Pass direction 2: wavelength, flux, error, or data quality
3       Polscan 2, Pass direction 1: wavelength, flux, error, or data quality
4       Polscan 2, Pass direction 2: wavelength, flux, error, or data quality
.
.
.
31      Polscan 16, Pass direction 1: wavelength, flux, error, or data quality
32      Polscan 16, Pass direction 2: wavelength, flux, error, or data quality
------------------------------------------------------------------------------

The .c3h file is a dataset with 56 groups containing the reduced polarimetry data.
The dataset is organized into four sets of 14 groups: groups 1 through 14 contain the Stokes parameter and polarimetry data for pass direction 1, groups 15 through 28 for pass direction 2, groups 29 through 42 contain the merged data from both pass directions 1 and 2, and groups 43 through 56 contain the merged data corrected for interference and instrument orientation. The organization of the .c3h file is shown in Table 14.10. Note that the wavelengths corresponding to the first set of 14 groups are given by the wavelength array for the first pass direction (i.e., group 1 of the .c0h file), while for the second set of 14 groups (groups 15 through 28) the corresponding wavelengths are given by the wavelength array for the second pass direction (i.e., group 2 of the .c0h file). For the merged data in the third and fourth sets of 14 groups (groups 29 through 56), the corresponding wavelengths are given by the first pass direction.

Table 14.10: Group Organization of the Calibrated .c3h File

Group #       Group #       Group #         Group #
Pass Dir. 1   Pass Dir. 2   Pass Dir. 1&2   Pass Dir. 1&2   Contents
                                            (corrected)
------------------------------------------------------------------------------
1             15            29              43              Stokes I
2             16            30              44              Stokes Q
3             17            31              45              Stokes U
4             18            32              46              Stokes V
5             19            33              47              Stokes I error
6             20            34              48              Stokes Q error
7             21            35              49              Stokes U error
8             22            36              50              Stokes V error
9             23            37              51              Linear polarization
10            24            38              52              Circular polarization
11            25            39              53              Polarization position angle
12            26            40              54              Linear polarization error
13            27            41              55              Circular polarization error
14            28            42              56              Polarization position angle error
------------------------------------------------------------------------------

CHAPTER 15: FOS Error Sources

In This Chapter...
* Photometric Inaccuracies
* Wavelength Calibration Errors
* Other Data Problems

Along with other error sources, deviations of target positions from the nominal "best" position can affect the accuracy of both the photometric and wavelength calibration. These deviations can occur in two dimensions. In the case of a simultaneous deviation in both the x- and y-direction, both photometric and wavelength accuracies are affected. In the following sections, we separately examine the deviations along and perpendicular to the dispersion direction.

Photometric Inaccuracies

Below we describe briefly each of the sources of photometric errors:

* Time-dependent variations in FOS sensitivity.
* Target miscentering.
* Flatfields.
* Change in telescope focus.
* Location of spectra.
* Thermal breathing.
* Jitter.
* GIM.
* Calibration system offsets.

Time-Dependent Variations in FOS Sensitivity

FOS sensitivities have occasionally displayed time-dependent variations. None of these variations have ever been accounted for in routine pipeline processing. However, all can be accounted for by re-processing the data with the new AIS_CORR flux calibration method and the most recent reference files and tables. From early 1991 through mid-1992 the FOS experienced a systematic decline in sensitivity for all grating and detector combinations. The systematic degradation in sensitivity of the FOS until mid-1992 was about 10 percent per year for all gratings on the blue side, and approximately 5 percent per year (except for the G190H grating) on the red side. The degradation for the G190H and G270H gratings with the red detector was about 10 percent per year and was wavelength dependent. The pre-COSTAR degradation leveled off between mid-1992 and the end of 1993. In Figure 15.1 we show the time dependence of the pre-COSTAR FOS sensitivity in a typical FOS/BL grating. Pre-COSTAR sensitivity changes for a typical FOS/RD grating are shown in Figure 15.2 and for the FOS/RD G190H in Figure 15.4.
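To get a feel for the size of these declines (purely illustrative arithmetic; the actual correction is the AIS_CORR recalibration described above, not a hand-applied factor):

```python
def relative_sensitivity(rate_per_year, years):
    """Fraction of the original sensitivity remaining after a steady
    fractional decline of rate_per_year, compounded over 'years'
    (a back-of-the-envelope model, not the AIS_CORR correction)."""
    return (1.0 - rate_per_year) ** years

# Blue side: ~10 percent per year from early 1991 to mid-1992, so
# roughly 85 percent of the original sensitivity remained after ~1.5 yr.
blue_remaining = relative_sensitivity(0.10, 1.5)
```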
Post-COSTAR sensitivity shows a dip relative to pre-COSTAR values between approximately 1500 and 2500 A. Except for FOS/RD G190H and G160L, post-COSTAR sensitivity has shown no temporal variation. From February 1994 through July 1994 FOS/RD G190H sensitivity dropped by approximately 2 percent and then increased by 3 to 8 percent between July 1994 and July 1995. FOS/RD G160L showed similar changes shortward of 2200 A. Figures 15.4 and 15.7 show the time dependence of post-COSTAR FOS sensitivity for the FOS/RD G190H grating. Archived FOS observations should always be re-processed with the AIS_CORR flux calibration method using the most recent set of reference files and tables.

Figure 15.1: Pre-COSTAR Typical Time Dependence on BLUE Side
Figure 15.2: Pre-COSTAR Typical Time Dependence on RED Side
Figure 15.3: Post-COSTAR Changes in Sensitivity Over Time
Figure 15.4: Pre-COSTAR Time Dependence in G190H Grating on RED Side
Figure 15.5: Corrections as a Function of Wavelength for H19

Target Miscentering

Inaccurate centering of a target in the aperture also leads to photometric errors because of loss of signal. The flux from the source will be underestimated systematically. Miscentering is likely to be the dominant error affecting flux calibration for small aperture observations. One can estimate the photometric error due to miscentering of the target in the aperture from the information supplied in Figure 15.6 and Table 15.1. Figures 15.6a and b show, for the circular 1" aperture, the post-COSTAR diminution of the transmitted flux from a point source versus the pointing error (miscentering). Table 15.1 gives the maximum pointing errors for different types of target acquisitions.
Thus, for example, for observations which used a binary acquisition and were taken through the 1" circular aperture, < 3 percent of the flux will be lost, while for observations using a five-stage peak-up acquisition but taken through the 0.3" aperture, about 3 to 5 percent of the flux will be lost for a point source. Analysis of several peak-up acquisition observations shows that the fall-off in the signal is gradual except when the target is within about 0.1" of the edge of the aperture. The light loss is ~50 percent when the target lies on the aperture edge.

Figure 15.6: Post-COSTAR Transmitted Flux Versus Pointing Error for the Single 1.0" Aperture

Table 15.1: Target Acquisition Pattern Pointing Accuracies and Overheads

Aperture   Pattern  Search-  Search-  Step-size-X  Step-size-Y  Pointing   Overhead
(arcsec)   Name     size-X   size-Y   (arcsec)     (arcsec)     Accuracy   (minutes)
                                                                (arcsec)
------------------------------------------------------------------------------
4.3        A1       3                 1.23                                 7
1.0        B1       6        2        0.61         0.61         0.43       12
0.5        C1       3        3        0.29         0.29         0.21       10
0.3        D1       5        5        0.17         0.17         0.12       17
           D2       5        5        0.11         0.11         0.08       17
           D3       5        5        0.052        0.052        0.04       17
           E1       4        4        0.17         0.17         0.12       14
           E2       4        4        0.11         0.11         0.08       14
           E3       4        4        0.052        0.052        0.04       14
           F1       3        3        0.17         0.17         0.12       10
           F2       3        3        0.11         0.11         0.08       10
1.0-PAIR   B2       6        2        0.61         0.61         0.43       12
0.5-PAIR   C2       3        3        0.29         0.29         0.21       10
0.25-PAIR  P1       5        5        0.17         0.17         0.12       17
           P2       5        5        0.11         0.11         0.08       17
           P3       5        5        0.052        0.052        0.04       17
           P4       4        4        0.11         0.11         0.08       14
2.0-BAR    BD1      1        11       0.052                     0.03       11
0.7-BAR    BD2      1        11       0.052                     0.03       11
SLIT       S        9        1        0.057                     0.03       10
ACQ/BIN    Z                                                    0.12       9
RED                                                             (1-sigma)
ACQ/BIN    Z                                                    0.08       9
BLUE                                                            (1-sigma)
------------------------------------------------------------------------------

Flatfield Correction

FOS flatfields are prepared from precisely pointed (<= 0.04" pointing accuracy), high S/N observations (S/N >= 200 per pixel, typically) of two relatively featureless spectrophotometric stars, BD+28D4211 and G191-B2B (also known as WD0501+527).
All FOS flatfield reference files are actually unity-normalized inverse flatfields, which are applied as multiplicative operators in the calfos pipeline data reduction. The overall flatfield correction generally has an accuracy of better than 3 percent. However, flats for certain specific dispersers, apertures, and spectral regions can be somewhat poorer. Recent studies (Keyes, 2nd HST Calibration Workshop, 1995) have shown that substantial (5 to 15 percent) flatfield variation occurs on spatial size-scales as small as 0.2" for all detector and disperser combinations, so that precise flatfield correction is only possible for science targets acquired with the same high pointing accuracy used for calibration observations. So-called superflats provide the best quality flatfields. The superflat observational and analysis procedures are explained in detail in FOS Instrument Science Report 088. The FOS/RD G190H flatfields have displayed time dependence and substantial spatial dependence in both the pre- and post-COSTAR periods. All pre-COSTAR flatfields are somewhat aperture dependent. However, individual aperture-specific flatfield observations were not made for many apertures in the pre-COSTAR period. Little post-COSTAR aperture dependence exists between individual single apertures or individual paired apertures. Due to the substantial spatial variations in photocathode granularity, flatfield reference files derived from single aperture observations should never be used to correct paired aperture observations, and vice versa. The following general recommendations summarize the guidelines for the applicability of FOS flatfield reference files.

Pre-COSTAR FOS/BL:

* For all FOS observations before January 1, 1992, with the SINGLE aperture and BLUE detector, use the science verification (SV) 1.0" aperture flats.
* For observations after January 1, 1992, use the new 4.3" aperture flats (superflats).
* The BLUE side data taken in a paired aperture during 1991 should be corrected with the new 4.3" aperture flats computed for the UPPER and LOWER aperture positions, or should be left uncorrected.
* In no circumstances should the SV 1.0" SINGLE aperture flats be used to correct data taken in a paired aperture.

Pre-COSTAR FOS/RD:

* For the G190H, G270H, and G160L dispersers, the Cycle 3 superflats of Lindler et al. (FOS Instrument Science Report 134) should be used for all observations obtained after August 7, 1993.
* For G400H, G570H, G650L, and the PRISM, the Cycle 3 superflats of Lindler et al. should be used for all observations obtained after June 18, 1992.
* For all other FOS/RD observations with the above dispersers, the FOS/RD flatfield obtained closest to the observation date should be used.
* No SINGLE aperture flats should be used to correct data obtained with a paired aperture.
* For G780H, no pre-COSTAR flatfields exist; unity correction is applied in the pipeline.
* For all barred apertures, no pre-COSTAR flatfields exist; unity correction is applied in the pipeline.

The FOS/RD flats (especially G190H, G160L, and to a lesser extent G270H) show significant wavelength structure (Figure 15.7). These flats also showed strong (> 10 percent) wavelength-dependent temporal variations during the first year of HST operation, when flatfields were not routinely monitored. Between January 1992 and June 1992, the G190H, G160L, and G270H FOS/RD flatfields were monitored monthly. During this intensive monitoring period the flats varied by less than 2 percent. For the duration of the pre-COSTAR period, and indeed post-servicing, these flats have been monitored about every 3 months. Several substantial (>= 5 percent) new features appeared between June and November 1992. For the remainder of the pre-COSTAR era (ending December 1993) changes were less than 2 percent.
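The pre-COSTAR recommendations above can be condensed into a small decision helper (a sketch only; the return values are descriptive labels, not reference file names, and the date comparisons are our reading of the guidelines):

```python
from datetime import date

def blue_single_flat(obs_date):
    """Pre-COSTAR FOS/BL SINGLE-aperture recommendation (sketch)."""
    if obs_date < date(1992, 1, 1):
        return "SV 1.0-arcsec aperture flats"
    return "4.3-arcsec aperture superflats"

def red_flat(disperser, obs_date):
    """Pre-COSTAR FOS/RD recommendation (sketch)."""
    if disperser == "G780H":
        return "unity (no pre-COSTAR flatfield exists)"
    if disperser in ("G190H", "G270H", "G160L") and obs_date > date(1993, 8, 7):
        return "Cycle 3 superflats"
    if disperser in ("G400H", "G570H", "G650L", "PRISM") and obs_date > date(1992, 6, 18):
        return "Cycle 3 superflats"
    return "FOS/RD flatfield obtained closest to the observation date"
```

In either case, never apply a SINGLE aperture flat to paired aperture data; the StarView Calibration screens remain the authoritative source for the recommended file.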
Figure 15.7: Red G190H Flatfield Reference Files

The StarView Calibration screens will almost always return the current recommended best flatfield reference file to use on your dataset.

Post-COSTAR

For both detectors:

* For all single aperture observations, apply the 4.3" aperture superflat-derived flatfield with the appropriate USEAFTER date.
* All Cycle 4 SINGLE aperture high-dispersion observations obtained prior to July 13, 1994 should be recalibrated with current reference files, as pre-COSTAR flatfields were used for these combinations prior to this date.
* All Cycle 4 SINGLE aperture low-dispersion observations obtained before July 1, 1995 should be recalibrated with the current reference files, as pre-COSTAR flatfields may have been used for these combinations prior to this date.
* All Cycle 4 PAIR aperture observations should be recalibrated with new reference files to be delivered fall 1995 (please check the FOS World Wide Web page, especially the "Documentation" section, for updates). Unity flats (no correction) have been applied to all post-COSTAR paired observations through September 1995.
* SINGLE aperture flats for apertures smaller than 4.3" also will be available in fall 1995. Nonetheless, little aperture dependence is seen in post-COSTAR single aperture flatfields. Please consult the FOS Reference File Reference Guide on STEIS for availability and applicability.
* Again, as for pre-COSTAR data, no SINGLE aperture flats should be used to correct data obtained with a paired aperture.
* For all barred apertures, no post-COSTAR flatfields exist; unity correction is applied in the pipeline.

A very strong photocathode blemish exists in FOS/BL G160L SINGLE aperture spectra. The contribution of the feature is extremely sensitive to target centering, such that large (often > 20 percent) uncertainties exist in the flatfield calibration of the spectral region between 1500 and 1560 A.
Modest post-COSTAR time-variation of FOS/RD G190H flatfields has been observed. Between November 1994 and February 1995, 2 to 4 percent changes occurred in the same spectral regions that were active in the pre-COSTAR era. In order to assess the impact of the flatfield used in the data reduction process for any particular FOS observation, the target spectrum should be compared with standard star spectra and with similar science observations taken as nearly contemporaneously as possible. Further, the flatfield used in the calibration of the target spectrum should be compared with any other available flats for that instrumental configuration. Figures 15.8 and 15.9 show representative flatfields for high dispersion gratings for both FOS detectors.

Figure 15.8: Typical Flatfield Reference Files for Blue (FOS/BL) High Dispersion Gratings
Figure 15.9: Typical Flatfield Reference Files for Red (FOS/RD) High Dispersion Gratings

Change in Telescope Focus

Systematic variations of pre-COSTAR FOS sensitivity occurred because OTA focus adjustments did not occur with sufficient frequency to keep up with the shrinkage of the graphite epoxy structure that is caused by outgassing on orbit. A change in focus by 15 microns led to photometric changes of up to 8 percent in the 4.3" aperture. The photometric variations also depend on aperture and very slightly on wavelength. All such variations are accounted for in the new calibration methods (APR_CORR, AIS_CORR, and TIM_CORR). Focus changes have been monitored very closely since the installation of COSTAR, and no significant focus-related sensitivity variations have occurred.

Location of Spectra (Y-bases)

The ability to acquire FOS spectra depends on knowledge of where the spectra lie on the photocathode, because electrons from a region of the photocathode the size of the diode array are deflected onto the diode array without magnification.
Pre-COSTAR calibration data to determine the y location (perpendicular to the dispersion direction) of the spectra on the photocathode have shown that there is a trend with time in the y location of spectra for all gratings on the blue side. These trends are not seen on the red side. Furthermore, the shapes of the spectra on the photocathode are not linear, but have a curvature of +/- 20 y-base units because of small distortions in the magnetic fields of the Digicon detectors. Recent analyses of post-COSTAR data show that the trend of a spatial drift with time in the positions of the spectra for all gratings observed with the blue (FOS/BL) detector continues, while the locations of the spectra for all gratings and the red (FOS/RD) detector are still scattered randomly. The amount of scatter in the mean y-base location has increased since on-board GIM correction started and the routine DEPERM (clearing of the ambient magnetic field in the detector) was turned off. A test has been scheduled to check the effect of DEPERM on the scatter in the YBASES. The uncertainties in the locations of the spectra have affected both ACQ/BIN pointing accuracy and FOS photometric accuracy. The size of these uncertainties with ACQ/BIN forced us to follow ACQ/BIN with a time-consuming ACQ/PEAK to improve the target centering for science with all the small (smaller than 1.0") apertures. The photometric quality of the data, especially for FOS/RD and the 1.0" aperture, is compromised due to the large fluctuations in the location of the spectrum. Further, this is not a simple matter of losing light; the effect is also wavelength dependent. On average, assuming that the locations of the spectra are known to an accuracy of only 20 YBASE units, for the large apertures (>= 1.0") 3 percent of the light of point sources and up to 20 percent for extended objects can be lost!

Thermal Breathing

There is a change in focus due to the thermal breathing of the secondary mirror support structures.
The change in temperature as the spacecraft crosses the terminator affects the support structure and moves the secondary mirror, which in turn changes the focus. This effect occurs on timescales equal to the orbital period of the spacecraft. The pre-COSTAR photometric error associated with thermal breathing is < 4 percent and affects the flux in a random and uncorrectable way. The post-COSTAR effect is reduced substantially due to the much narrower PSF and typically affects only the 0.3" and smaller apertures. Only in very long (> 3000 seconds) RAPID mode observations can one see the periodicity (and possibly correct for it).

Jitter

Jitter is mostly due to the thermal instability of the solar panels. The greatest excursions occur when the spacecraft crosses the terminator, lasting for a few minutes. The jitter causes the telescope to mispoint, moving the target in the aperture. This problem has the largest effect on the small apertures (0.3" and smaller), because the target can move out of the aperture for a short period. The associated photometric errors cannot be accurately determined because the y-location of the spectra is unknown. The photometric error is rarely more than 1 percent, and always less than 3 percent.

Geomagnetically Induced Image Motion

Off-line geomagnetically-induced image motion (GIM) correction is needed only for data taken before April 5, 1993. For spectra taken later, the GIM correction is applied onboard the spacecraft. An FOS observation requires the electrons from the photocathode to be magnetically deflected onto the diode array. Due to insufficient magnetic shielding of the Digicon detectors, the earth's magnetic field affects where the electrons fall on the diode array. The effective magnetic field experienced by the electrons depends on the location of the spacecraft in the earth's magnetic field.
The shift of the spectrum due to the changes in the effective magnetic field is both in the dispersion direction (x) and perpendicular to the dispersion direction (y). As of April 5, 1993, this geomagnetically-induced image motion (GIM) problem has been corrected for in real-time aboard the spacecraft through the application of a spacecraft position-dependent correction to the magnetic deflection to compensate (in both x and y) for the effects of the earth's magnetic field. However, before April 5, 1993, there was no real-time correction for GIM. The effect of the x-shift in the photoelectrons' impact point is to effectively shift the spectrum in the dispersion direction as a function of time. This displacement can be seen in data taken before April 5, 1993, by plotting the individual groups of raw data on a single plot (use the STSDAS grspec task) and noting the shift (in x) of the centroids of individual emission or absorption lines. The calfos task corrects for the x-shift caused by the GIM (as long as the OFF_CORR switch is set to "PERFORM") in the creation of the calibrated spectral data (.c0h and .c1h files). The GIM correction for a given group in an observation is determined from the orbital position of the spacecraft at the mid-point of the observation time for each group. To avoid resampling the data, and hence losing error information, the correction is applied as an integral pixel shift; the accuracy of the correction is therefore +/- 0.5 pixel, where each pixel is 1/4 diode in the standard spectrophotometry modes. However, there is no way to correct for the photometric effects of the shift in y introduced by GIM. The y-shift effectively causes the point spread function (PSF) from the target to move on the diode array, leading to the loss of light off the edge of the array. This creates a time-dependent error in the flux, which is most severe for poorly-centered observations.
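The integral-pixel x-shift correction described above can be sketched as follows (synthetic data; in calfos the per-group shift comes from the spacecraft's orbital position at the group's mid-observation time, which this sketch simply takes as an input, and the sign convention is an assumption):

```python
import numpy as np

def gim_x_correct(counts, shift_pixels):
    """Remove a GIM-induced x-shift by a whole-pixel move, as calfos
    does, so the spectrum is never resampled and the error arrays stay
    valid. Rounding to an integral pixel leaves a residual of at most
    +/- 0.5 pixel (1/8 diode in the standard 1/4-diode-per-pixel
    spectrophotometry modes)."""
    n = int(round(shift_pixels))
    corrected = np.zeros_like(counts)      # pixels shifted in from the edge stay zero
    if n >= 0:
        corrected[: counts.size - n] = counts[n:]
    else:
        corrected[-n:] = counts[:n]
    return corrected, shift_pixels - n     # corrected spectrum, residual shift
```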
For well-centered observations through the 4.3" aperture the error will be < 5 percent. Some data taken after April 5, 1993 may not have had the onboard GIM correction applied. The header keyword YFGIMPEN will tell you if the onboard correction was enabled; if the value is "TRUE" then the onboard correction was applied. The onboard GIM correction is applied on a finer grid than is provided by the pipeline GIM correction in both the x and y axes. In the x direction the onboard GIM correction is applied in units of 1/32 of the width of the diodes, while in the y direction the unit is 1/32 of the diode height. The onboard GIM correction is calculated and updated every 30 seconds.

Absolute Photometric Calibration System Offsets

All pipeline-processed (FLX_CORR) pre-COSTAR FOS observations have been placed on the absolute photometric system of Bohlin et al. (1990). All post-COSTAR pipeline-processed (FLX_CORR) observations prior to September 1995 are on an early version of the white dwarf flux scale based upon G191-B2B. AIS_CORR pre- and post-COSTAR observation processing produces fluxes on a slightly revised version of the white dwarf flux scale, based upon eight standard stars. The pre-COSTAR FLX_CORR system differs from the AIS_CORR scale by < 15 percent for wavelengths < 2000 A, ~5 percent for the wavelength range 2000-3500 A, and < 3 percent for wavelengths > 3500 A (see Table 15.2). An illustrative comparison of the pre-COSTAR FLX_CORR flux scale and the white dwarf flux system is shown in Figure 15.10. The difference between the post-COSTAR FLX_CORR system and the recommended AIS_CORR system is typically < 3 percent for the 4.3" and 1.0" apertures and high dispersion gratings, and exceeds 10 percent only in small regions for the low dispersion modes.
Table 15.2: Pre- and Post-COSTAR Calibration System Differences

Wavelength   Percent Uncertainty
---------------------------------------
Far UV       ~10 percent
Near UV      ~5 percent
Visible      ~3 percent
---------------------------------------

Figure 15.10: Pre-COSTAR Pipeline Ratio of HST to White Dwarf Flux Scales

Wavelength Calibration Errors

Of the above calibration error sources, the following four most severely affect the accuracy of the wavelength scale:

* Filter-grating wheel (FGW) non-repeatability.
* Aperture wheel non-repeatability.
* Residual uncertainty of the magnetic field after GIM correction.
* Target miscentering.

Filter-Grating Wheel Non-Repeatability

Although the positions of the filter-grating wheel (FGW) in the beam of light are stabilized by notches, there is still some mechanical non-repeatability of the wheel position, the x-component of which influences the accuracy of the wavelength calibration. The amplitude of this effect has been measured recently based on post-COSTAR data (Koratkar & Martin 1995, 2nd HST Calibration Workshop). They find that the 1-sigma non-repeatability is of the order of 0.1 diodes, with occasional deviations of up to 0.35 diodes.

Aperture Wheel Non-Repeatability

The non-repeatability of the aperture position introduces an additional uncertainty in the location of spectra along the dispersion direction of the same order of magnitude, with a 1-sigma uncertainty of about 0.1 diodes. Since most observations are taken with only one aperture, this effect often causes an offset for the entire sequence of measurements within one telescope visit on the target(s).

Residual Uncertainty of Magnetic Field after GIM Correction

We have noted above that the GIM correction cannot be applied in the y-direction. In the x-direction, i.e., in the dispersion direction, it minimizes the effect of the earth's magnetic field on the internal magnetic field.
However, a small residual uncertainty is left, which amounts to about 0.03 diodes (1-sigma) for FOS/RD. On the blue side, this test has been executed twice, first during SMOV (February 1994), and again in June 1995. However, the results are not yet available. It is known from experience with earlier data that the shielding of the blue detector is better than that of the red side, and the residual uncertainties after correction will correspondingly also be lower.

Target Miscentering

Target miscentering can also lead to photometric inaccuracies due to flux losses when the target is located close to the edge of the aperture. The largest errors can occur when observing with the smallest (0.3" or smaller) apertures. With the 0.3" aperture, a target miscentering of 0.12" (the target centering accuracy routinely achieved with the binary acquisition mode) leads to a flux loss of about 60 percent with respect to perfect centering. A pointing accuracy of 0.04" (as reached with a 4-stage peak-up sequence) leads to flux losses of less than 4 percent in the 0.3" aperture. Observing with the 1.0" aperture, the same pointing accuracy of 0.04" leads to no measurable flux losses. For this aperture, the pointing accuracy of the binary target acquisition technique is sufficient; it will lead to flux losses of less than 3 percent.

Other Data Problems

In this section we describe how to recognize and correct major problems that might affect your pipeline-processed FOS data. The problems addressed here are:

* Effect of an incorrect dead diode reference file.
* Effect of a noisy diode.
* Effect of an incorrect flatfield reference file.
* Under-subtraction of background light.
* Scattered light.

Effect of Incorrect Dead Diode Reference File

During pipeline processing, calfos uses the dead diode reference file (the list of all disabled diodes) to determine how many diodes contributed towards the counts for each pixel.
This information is needed to calculate the exposure time per pixel and convert counts to count rates (see page 237). If an incorrect dead diode reference file is used, calfos does not have an accurate accounting of the diodes that were used for the onboard integration. This leads to serious errors in the count rates and fluxes for affected pixels. The effect of an incorrect dead diode correction (see Figure 15.11) has a very distinct signature, which looks like an absorption or emission feature with sharp edges, extending over a fixed number (NXSTEPS x OVERSCAN) of pixels (usually 20). Further, the dead diode absorption feature typically does not go to zero counts because more than one diode contributes towards the counts in a given pixel. Thus the depth of the absorption feature for a pixel affected by a single missed disabled diode is 1 - [(OVERSCAN - 1) / OVERSCAN], or usually 20 percent. In Figure 15.11, panel (a) shows the raw counts and the dead diodes labeled 1-20. Panel (b) shows the count rate data from the pipeline processing from the .c4h file. Some of the dead diodes are correctly removed in the pipeline calibration while others are not. This occurred because an incorrect dead diode reference file was used in the processing of the data. Panel (c) shows the .c4h file after the correct dead diode reference file was used in the calibration. If you notice a feature in your data similar to the absorption feature described above, you should be suspicious that an incorrect dead diode reference file was used in the pipeline processing of your data. You can use the Calibration Reference screens in StarView to determine whether a more appropriate set of calibration reference files (including the dead diode reference file) now exists to calibrate your data. If so, you can retrieve those files and recalibrate your data using calfos (see "Recalibrating FOS Data" on page 277).
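The width and depth relations just given can be captured in a couple of lines (a sketch to help screen suspect features; the helper name is ours):

```python
def dead_diode_signature(nxsteps, overscan):
    """Expected signature of a single missed disabled diode: a sharp
    feature NXSTEPS * OVERSCAN pixels wide whose depth is the fraction
    1 - (OVERSCAN - 1) / OVERSCAN of the local count rate."""
    width_pixels = nxsteps * overscan
    depth_fraction = 1.0 - (overscan - 1) / overscan  # equals 1/OVERSCAN
    return width_pixels, depth_fraction

# The usual quarter-stepped, five-times-overscanned case:
# a 20-pixel-wide feature about 20 percent deep.
width, depth = dead_diode_signature(4, 5)
```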
If there is no change in the recommended dead diode reference file for your data and you still suspect that your data are affected by a missed disabled diode, contact the STScI Help Desk (help@stsci.edu) for further assistance. Figure 15.11: Effect of Incorrect Dead Diode Correction Effect of Noisy Diode The effect of a noisy (or hot) diode is typically an emission feature extending over a fixed number (NXSTEPS x OVERSCAN) of pixels (typically 20). Figure 15.12 shows an observation where pixels 400 to 420 are affected by a noisy diode. This effect cannot be removed by recalibrating the data; instead, you can manually edit the data to cosmetically smooth over or blank out the affected pixels. IRAF and STSDAS tasks to do this include fixpix and splot in its etch-a-sketch mode. Effect of Incorrect Flatfield Reference File During pipeline calibration, calfos corrects for small-scale (less than 10 diode) inhomogeneities in the sensitivity of the FOS by multiplying each spectrum by an inverse flatfield. Small-scale sensitivity variations result from both small-scale inhomogeneities in the photocathode and diode-to-diode sensitivity variations. If an incorrect, or inappropriate, flatfield reference file is used to flatfield the data, small emission-like or absorption-like features will appear in the spectrum, corresponding to sensitivity variations that were introduced or left uncorrected by the flatfielding. Figure 15.12: Effect of Noisy Diodes Figure 15.13 shows an example of a spectrum that was flatfielded incorrectly. Panel (a) shows the count rate data in the .c5h file that results from using both correct (solid line) and incorrect (dotted line) flatfield files. Panel (b) shows a blowup of the region from pixels 300 to 1300 of the same data.
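The noisy-diode repair described earlier in this section (e.g., the pixels 400-420 case of Figure 15.12) can also be done outside IRAF. The sketch below (our own illustration; patch_pixels is not an STSDAS task and is only a rough stand-in for fixpix) linearly interpolates across an affected pixel range from the neighboring good pixels:

```python
import numpy as np

def patch_pixels(spectrum, lo, hi):
    """Cosmetically repair pixels lo..hi (inclusive) affected by a noisy
    diode by linear interpolation between the bracketing good pixels.

    Returns a repaired copy; the input array is left untouched.
    """
    fixed = np.asarray(spectrum, dtype=float).copy()
    x = np.arange(lo, hi + 1)
    fixed[lo:hi + 1] = np.interp(x, [lo - 1, hi + 1],
                                 [fixed[lo - 1], fixed[hi + 1]])
    return fixed
```

As with splot's etch-a-sketch mode, this is purely cosmetic: the patched pixels carry no information and should be flagged as such in any subsequent analysis.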
Figure 15.13: Incorrectly Flat-Fielded Spectrum of the Red G190H Grating High-precision (S/N > 30) spectroscopic measurements with the FOS require that the observations sample the same portion of the photocathode as do the flatfield calibration observations. This sampling repeatability is limited by the target acquisition accuracy (typically 0.04" for calibration observations) and the filter-grating wheel positional repeatability (1 sigma deviation of approximately 0.04"). The photocathode granularity varies by 5 to 15 percent over distances as small as 0.2" (see Figure 15.14), and no extended or detailed mapping of the granularity is available in the direction perpendicular to dispersion. Paired-aperture granularity is substantially different from that for the single apertures. Although the overall flatfield correction generally has an accuracy of better than 3 percent, features in certain specific dispersers, apertures, spectral regions, and times of observation may be somewhat poorer, as changes may not have been adequately tracked by calibration observations. Table 15.3 provides a list of detector and disperser combinations for which occasional flatfield inaccuracies of > 5 percent are known. One particularly illustrative feature is the strong photocathode blemish that shows up in FOS/BL G160L spectra in the 1500-1560 A region for all SINGLE apertures. This feature displays a strong spatial dependence such that errors of 20 percent or more are common after flatfielding. As Figure 15.15 clearly illustrates, this strong blemish is seen in the 4.3" and 1.0" SINGLE aperture spectra and, though slightly weaker, with the 0.3" aperture (which samples a photocathode location concentric with the 1.0" SINGLE), and is very prominent in the 1.0-PAIR-LOWER location (approximately 1.3" below the SINGLE aperture location). However, the feature is nearly absent in the 1.0-PAIR-UPPER spectrum!
Due to this striking behavior, the FOS group now recommends that any science observations requiring quantitative analysis of features in the 1500-1560 A region (notably C IV) should be performed only with the UPPER paired apertures. Note also that the FOS/RD G190H flatfields have displayed time dependence and substantial spatial dependence in both the pre- and post-COSTAR periods. Please refer to "Flatfield Correction (FLT_CORR)" on page 240 for a more complete discussion of these changes. Figure 15.14 illustrates the photocathode granularity described above, while Figure 15.15 illustrates the photocathode blemish. Figure 15.14 reflects count rate observations for November 1994 FOS/RD G190H 1.0" aperture observations of the spectrophotometric standard star G191-B2B. From top to bottom, the exposures are displaced by +0.20" from the aperture center, well centered, and displaced by -0.20" from the aperture center. The vertical scale was shifted with no magnification for purposes of display clarity. Figure 15.15 reflects count rate observations for June 1994 FOS/BL G160L observations of the spectrophotometric standard star BD+28D4211. From top to bottom: 4.3" aperture, 1.0-PAIR-UPPER, 1.0" single, 1.0-PAIR-LOWER, and 0.3" single apertures. Again, the vertical scale was shifted with no magnification for purposes of display clarity. Figure 15.14: Variations in Photocathode Granularity Figure 15.15: Photocathode Blemish at 1500-1560 A
Table 15.3: Grating and Detector Combinations with More than 5 Percent Inaccuracy
FOS/BLUE    FOS/RED
-----------------------------
G130H       G190H
G190H       G270H
G270H^a     G160L
G160L^a     PRISM
-----------------------------
a. Only one pixel range in which the 5 percent deviation occurs.
You should be particularly wary of unusual features in your data. All observers should retrieve the flatfield reference file used to calibrate their data from the HST Archive or STEIS (these files have been delivered routinely on data tapes since May 1993).
As a careful check, compare the flatfield reference file data with: * Your raw science data. * The raw count rate data used to produce the flat. * Raw count rate data for any other standard stars taken as nearly contemporaneously as possible with your science data, in order to assess suspicious features in your science data. An online reference guide is accessible from both the Advisories Section and the Documentation Section of the FOS world-wide web page on STEIS (http://www.stsci.edu/ftp/instrument_news/FOS/topfos.html). This reference guide offers the list of currently recommended flatfield reference files by USEAFTER date for all combinations of detector, disperser, and aperture. Separate guides are maintained for the pre-COSTAR and post-COSTAR periods. Under-Subtraction of Background Light The FOS is subject to two types of background effects caused by high energy particles: * Light generated by Cerenkov radiation as particles hit the faceplate. * The striking of the detector by the particles themselves, leading to spurious counts. The default reference background file that is currently used in calfos corrects for the dark signal from Cerenkov light. The background reference files are shown in Figure 15.16; both the red (dotted line) and the blue (solid line) detector backgrounds are shown. This model was obtained during Science Verification (see FOS Instrument Science Reports 071, 076, 079, and 080). For typical observations, which are obtained with no simultaneous dark data, the background reference file is appropriately scaled to account for the location of the spacecraft in the earth's magnetic field. The scaled background file, which is essentially an estimate of the dark current, is written to the .c7h file (in units of counts per second) and subtracted in the pipeline calibration.
It is known that this background has a positional dependence: over the South Atlantic Anomaly (SAA), the background count rate is two orders of magnitude higher than elsewhere in the orbit. Thus, no FOS observations are carried out in the SAA. Figure 15.16: Background Reference Files: Dotted Line is Red Side, Solid Line is Blue Side The geomagnetic model used to scale the reference background file in the calfos pipeline underestimates the background counts by approximately 12 percent at low geomagnetic latitudes (< 20 degrees) and by about 20-30 percent at high geomagnetic latitudes. This error is insignificant in the case of strong sources (you can verify this by comparing the counts in the .c5h and .c7h files), but will cause substantial errors in the derived flux and spectral shape of weaker sources. A V ~ 19 magnitude star with an effective temperature of 10000 K, for example, will have the same count rate as the dark count rate for the FOS/RD detector. A new background reference file using a more sophisticated charged particle background and geomagnetic field model is currently being developed. For this purpose, all REJLIM=0 IMAGE mode darks from Cycles 1 through 4 were analyzed, and relations between the dark count rate and other variables (specifically geomagnetic position, solar angle, and time) were investigated. There is little or no correlation between most variables and the dark count rate, with the exception of the geomagnetic latitude. The overall dark count rates are 0.0109 +/- 0.0022 counts/second/diode (FOS/RD) and 0.0064 +/- 0.00083 counts/second/diode (FOS/BL). These numbers are comparable to those quoted in previous versions of the FOS Instrument Handbook. In our preliminary analysis of the geomagnetic latitude dependence of the background count rates, we use Singular Value Decomposition (SVD) to make a least-squares fit of a polynomial to the data.
As we have evidence that the count rate increases with the absolute value of the geomagnetic latitude, we use a quadratic as the fitting function (where alpha_gm denotes the geomagnetic latitude): FOS/RD: Rate_Red = 8.8 x 10^-3 - 8.0 x 10^-6 alpha_gm + 5.2 x 10^-6 alpha_gm^2 FOS/BL: Rate_Blue = 4.3 x 10^-3 + 3.0 x 10^-5 alpha_gm + 3.0 x 10^-6 alpha_gm^2 For observations that have simultaneously obtained background data, these data are used for the background subtraction, and the error due to background subtraction is therefore the error in the background data. Scattered Light Most scattered light in the FOS is caused by scattering off the gratings and apertures. Pre-flight data taken in the laboratory show that the scattered component increases with increasing wavelength. The G130H, G190H, G270H, G160L, and PRISM spectra (below 2500 A) are substantially affected by scattered light. A comparison between spectra taken with the solar-blind (scatter-free) GHRS and with the FOS shows that the scattered light component dominates the count rate for FOS ultraviolet observations of late-type stars (e.g., see Figure 15.17). Thus, scattered light is a major problem for red objects observed with the short wavelength gratings, where the scattered light photons can dominate the blue photons dispersed by the gratings. For blue objects, the effect of scattered light is less significant. A model of the scattered light in the FOS was developed by M. Rosa; this model allows a more detailed description of the contribution of scattered light to the observed spectrum depending on the source spectrum (see "Scattered Light Correction (SCT_CORR)" on page 240). This tool, bspec, will be made available in STSDAS shortly. A comparison of GHRS and FOS spectra (Figure 15.17) and of the bspec model predictions with the measurements will then also allow a more accurate assessment of the inaccuracies of the current scattered light correction in the pipeline.
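The preliminary dark count-rate quadratics quoted earlier in this section are easily evaluated. The sketch below is our own illustration (the function name is ours, and we assume alpha_gm is expressed in degrees, consistent with the "< 20 degrees" usage above):

```python
def fos_dark_rate(geomag_lat_deg, detector="RED"):
    """Predicted FOS dark count rate (counts/second/diode) from the
    preliminary quadratic fits quoted in the text.

    geomag_lat_deg is alpha_gm, the geomagnetic latitude (assumed to be
    in degrees).
    """
    a = geomag_lat_deg
    if detector.upper() == "RED":
        return 8.8e-3 - 8.0e-6 * a + 5.2e-6 * a**2
    return 4.3e-3 + 3.0e-5 * a + 3.0e-6 * a**2
```

At alpha_gm = 0 the relations reduce to their constant terms, and for both detectors the predicted rate rises toward high geomagnetic latitude, as the text describes.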
Figure 15.17: Scattered Light Comparison of GHRS and FOS ------------------------------------------------------------------------------ CHAPTER 16: Recalibrating FOS Data In This Chapter... Finding Reference Files and Calibration Information Recalibrating FOS Data Accuracies This chapter explains the recalibration process and related information, such as finding the most recent information about calibration reference files and instrument changes. Finding Reference Files and Calibration Information If you need to recalibrate your data, the most important information you will need is the names and locations of the appropriate reference files. The relevant file extensions were listed in Table 7.5 on page 222. The location of all available reference files is given in Table 16.1.
Table 16.1: STEIS Calibration Listings for FOS
STEIS Directory and File                                   Description
------------------------------------------------------------------------------
/instrument_news/fos/                                      Listing of flatfield reference files
flat_field_tables_apr93.ps, flat_field_tables_apr93.asc
/instrument_news/fos/                                      Listing of inverse sensitivity reference files
ivs_tables_jun93.ps, ivs_tables_jun93.asc
/instrument_news/bibliographies/fos_bib                    Bibliography of FOS Instrument Science Reports
------------------------------------------------------------------------------
In general, the most up-to-date information on HST can be retrieved from the STScI world wide web pages, using the URL: http://www.stsci.edu The FOS instrument team maintains an area under the FOS web page (URL http://www.stsci.edu/ftp/instrument_news/FOS/topfos.html) in which you can look up different kinds of documentation, for example the location of the most up-to-date reference files and the most recent Instrument Science Reports. Another valuable source of general information on the FOS is the current version (6.0) of the FOS Instrument Handbook.
More articles on the FOS can be found in the proceedings of the two HST calibration workshops (Blades and Osmer (eds.) 1993; Koratkar and Leitherer (eds.) 1995). If you cannot find the solution to your problem in this documentation, you can ask for advice via e-mail to help@stsci.edu. The FOS Instrument Handbook also lists the names, e-mail addresses, and phone numbers of the FOS Instrument Scientists and Data Analysts, who might be able to assist you. Many problems occur repeatedly, and some questions pertain to so many practical cases that they are asked very frequently. Therefore, we have compiled both a list of frequently asked questions (FAQ), which is available on one of our web pages, and several short "cookbooks" describing standard procedures for handling certain situations. These cookbooks are intended to give specific advice on special data reduction issues: they provide relatively detailed answers to frequently asked questions. They are not intended as formal publications, like the FOS Instrument Science Reports. A list of available cookbooks is provided on page 301. Recalibrating FOS Data The IRAF/STSDAS task used for both calibrating and recalibrating FOS spectra is calfos. The task sequentially performs each step described in "Overview of the FOS Pipeline Process" on page 244. To recalibrate FOS data using updated calibration files, you need to edit the header of the original science data file, .d0h (using the task hedit), and replace the names of the original calibration files with those of the new ones. The first thing to do is to make sure that your data are flux calibrated with the new AIS method. The STSDAS task addnewkeys will update the headers of your .d0h files accordingly. To enter the most recent calibration and reference files and tables into the headers, use the task getreffile.
You can pipe the output directly into the routine upreffile, which will update the file headers: getreffile @inputfile.list | upreffile After doing this, check the file headers carefully and make sure that all calibration switches are set properly. Once this is done, you can rerun calfos. The getreffile task is available only at STScI because it uses the Calibration Database (CDBS), which is not part of STSDAS. Alternatively, the files can be retrieved from the HST Archive. Figure 16.1: Partial Post-COSTAR FOS Header calfos will create new calibrated output files with the same extensions as the originally delivered data, namely .cnh and .cnd (n = 0 ... 8). More details on updating file headers are available in a cookbook available from the Help Desk. You should always compare the final calibrated data with the various reference files used in the data reduction process to make sure that spurious features were not introduced through improper data handling in the pipeline or your recalibration. If you use the proper reference files, you can expect to achieve the accuracies described below for FOS spectra. Accuracies In this section, we summarize what is known about the accuracies of calibrated FOS data. We point out any known systematic effects in the determination of the photometric scale which may affect the flux calibration of your FOS data. Described here are: * Wavelength accuracies. * Photometric accuracies. * Polarimetric accuracy. Wavelength Accuracy The vacuum wavelength scale is computed during the pipeline processing, and the derived wavelengths for each dataset are stored in the .c0h file. Internal wavelength calibration lamps are used to determine the dispersion coefficients corresponding to each disperser and detector combination. The rms errors in the dispersion relations range between 0.01-0.08 diodes.
This, together with a non-linearity of the diode array of about 0.02 diodes (rms), is the physical hard limit for the achievable accuracy of the wavelength calibration for the FOS. Even if the target is well centered in the aperture, the highest possible accuracy can be achieved only if wavelength calibration spectra are taken together with the science data. If you choose to use only wavelength calibration data from the Archive, the calibration accuracy is determined by non-repeatabilities of the filter-grating wheel (FGW) and, if it is moved during the observations, of the aperture wheel. The FGW non-repeatability introduces an error of ~0.1 diodes. This corresponds to ~60 km s^-1 for the high dispersion gratings. On average, no large temporal shift has been observed in the wavelength calibration for any disperser and detector combination. Residual inaccuracies in the magnetic deflection after GIM correction can also occasionally lead to wavelength calibration non-repeatabilities of about 0.1 diodes. Since the light path of an external source is offset slightly from that of the internal calibration lamps, a correction for this offset has to be taken into account in the absolute wavelength calibration. Observations of a radial velocity standard source are then used to determine the zero point (internal-to-external offsets) of the wavelength scale. The internal-to-external offsets of the FOS wavelength calibration are 0.102 +/- 0.1 diodes for the blue side and 0.176 +/- 0.105 diodes for the red side. Hence, including the non-repeatabilities mentioned above, the accuracy with which the zero point of the wavelength scale is known in an individual spectrum is <= 0.25 diodes. A new wavelength calibration, based on more than 10 post-COSTAR observing epochs, is currently being developed. Preliminary results show no systematic change due to the deployment of COSTAR.
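The conversion between a diode-scale wavelength error and a velocity error, of the kind quoted earlier in this section, depends on the dispersion and wavelength of the grating in use. The following generic helper is our own sketch (the function name is ours, and the dispersion value, in A per diode, must be supplied by the user for the disperser in question); it simply implements dv = c x (diode error x dispersion) / lambda:

```python
C_KM_S = 2.99792458e5  # speed of light in km/s

def diode_error_to_velocity(diode_err, dispersion_A_per_diode, wavelength_A):
    """Convert a wavelength-calibration error expressed in diodes into a
    radial velocity error in km/s.

    dv = c * (diode_err * dispersion) / lambda, with the dispersion in
    A per diode and the wavelength in A.
    """
    return C_KM_S * diode_err * dispersion_A_per_diode / wavelength_A
```

For a given science spectrum, insert the dispersion of the grating actually used and the wavelength of the line of interest; errors quoted in diodes (FGW non-repeatability, GIM residuals, zero point) then translate directly into velocity uncertainties.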
However, based on the larger number of observations compared to the old pre-COSTAR wavelength calibration (Kriss et al. 1990), outliers will be identified, and thus a higher accuracy will be reached with the new reference tables. Overlap Region of Adjoining Gratings The wavelength calibration uncertainty does not exceed 0.5 diodes in the overlap regions of the various gratings. This uncertainty is due to the lack of an arc comparison line in the overlap region of the gratings, where the wavelength solution is an extrapolation. Note that an error in the wavelength calibration introduces only a small (< 1 percent) error in the flux calibration. Photometric Accuracy Reminder: All FOS observations should be re-processed with the most current AIS_CORR reference files and tables. The FOS flux calibration is obtained from carefully centered (<= 0.04" pointing accuracy) and flatfielded observations of spectrophotometric standard stars. Five standard stars, G191-B2B, BD+28D4211, BD+33D2642, BD+74D325, and HZ44, were used for the pre-COSTAR calibration observations. For the post-COSTAR calibrations, this set and three more white dwarf standards, GD71, GD153, and HZ43, are used. All pre-COSTAR observations were made with the 4.3" aperture, and post-COSTAR observations are made with either the 4.3" or the 1.0" aperture. The typical S/N of binned spectral regions for these observations is 100 and often substantially greater. Two methods of flux calibration have been used. All pre-COSTAR data reduced in the pipeline were calibrated with the FLX_CORR method. One average set of sensitivity corrections was applied to all pre-COSTAR data, regardless of when the data were taken or how far the telescope may have been from nominal focus.
Since fall 1994 (about one year after servicing), a second method of flux calibration has been available for pre-COSTAR Archive data: the so-called AIS_CORR method, which includes corrections both for time-dependent sensitivity variations and for the actual focus history of the telescope. All post-COSTAR observations prior to fall of 1995 have also been calibrated in the pipeline with the FLX_CORR method. Beginning in the fall of 1995, the AIS_CORR method will be the default method for pipeline calibration. The FLX_CORR IVS corrections for the 4.3" and 1.0" apertures were derived from post-COSTAR observations, but the FLX_CORR IVS curves for all other apertures were calculated from theoretical aperture throughputs. All AIS_CORR corrections are based on actual post-COSTAR observations. Some small-aperture FLX_CORR sensitivities may differ from AIS_CORR by up to 10 percent in selected ultraviolet spectral regions. The effects of focus and time-dependent sensitivity variation are much less severe in the post-COSTAR era, but improvement is always obtained with AIS_CORR. Pre-COSTAR FLX_CORR method (the standard pipeline-reduced data) photometry is substantially inferior to fluxes derived from the new AIS_CORR method. Several important sources of photometric error could not be removed in the pipeline processing, but can be calibrated quite accurately by reprocessing with AIS_CORR. For example, pre-COSTAR FLX_CORR pipeline-calibrated fluxes may contain errors of up to 5 percent (FOS/RD) or up to 8 percent (FOS/BL) due to the pre-COSTAR sensitivity decline, of 5 percent due to telescope focus changes, and of 3-15 percent (depending on spectral region) due to the offset between the IUE-based absolute flux system and the newer white dwarf model system. All of these error sources can be removed by re-calibration with AIS_CORR.
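As an illustration only, fractional error terms such as those quoted above can be combined in quadrature. Note that treating the individual terms as independent is our assumption for this sketch, and systematic offsets (such as the IUE-to-white-dwarf flux-system shift) do not strictly add this way:

```python
from math import sqrt

def combined_error(*terms):
    """Combine fractional error terms in quadrature, assuming the
    individual error sources are independent (our assumption; not a
    statement about the actual FOS error budget)."""
    return sqrt(sum(t * t for t in terms))

# Worst-case FOS/RD pre-COSTAR FLX_CORR terms quoted in the text:
# 5 percent (sensitivity decline), 5 percent (focus), up to 15 percent
# (flux-system offset).
worst = combined_error(0.05, 0.05, 0.15)
```

Even under this optimistic independence assumption, the combined pre-COSTAR FLX_CORR uncertainty is far larger than the <= 3 percent achievable after AIS_CORR re-calibration, which is why re-processing is recommended.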
Post-COSTAR FLX_CORR method photometry, which is on the white dwarf absolute system, typically differs from the recommended AIS_CORR re-processing by < 3 percent for large apertures but, as noted above, may be in error by up to 10 percent for the smallest apertures and shortest wavelengths. The overall uncertainty of the AIS_CORR FOS flux calibration for point sources is <= 3 percent (Bohlin et al. 1995). Additional sources of photometric error that cannot be removed with the improved AIS_CORR processing are listed in Table 16.2. Approximate levels of error introduced are given for both the pre- and post-COSTAR cases. The PODPS inverse sensitivity files used in the pipeline before March 1992 were prelaunch estimates and are incorrect by a factor of 2-3 because the spherical aberration and the real performance of the photocathode in space were not considered. Prior to March 1993, there were some uncalibrated apertures for which the inverse sensitivity files were prelaunch estimates. As of March 1993, new reference files with the sensitivity set to 1 were installed for the uncalibrated apertures; any data calibrated with these files remain in units of count rates and are not flux calibrated. In the post-COSTAR period, but before March 21, 1994, unity IVS files were used for all combinations of detector, disperser, and aperture; only three programs were affected. Unity IVS files continue to be used for all barred apertures; AIS_CORR files for these apertures will be ready for use in 1996. Before AIS_CORR, some post-COSTAR small aperture fluxes are in error by < 10 percent.
Table 16.2: FOS Photometric Errors Not Removed by AIS_CORR
Source of Error                  Pre-COSTAR    Post-COSTAR   Comment
------------------------------------------------------------------------------
Miscentering target in aperture  ~5 percent    ~3 percent    Depends on target acquisition technique and aperture used. Error can be easily estimated.
y-location of extended objects   3-10 percent  1-10 percent  Strongly affects spectra
Thermal breathing                ~4 percent    ~1 percent    Aperture dependent
Jitter                           < 3 percent   < 1 percent   Aperture dependent
GIM^a                            < 5 percent   < 1 percent
------------------------------------------------------------------------------
a. The pre-COSTAR GIM estimate refers to the period before April 5, 1993.
Polarimetric Accuracy For polarimetric observations, the light beam in the FOS is split right behind the entrance aperture by a Wollaston prism. One of the two resulting rays (the first and second pass, see "Polarimetric Calibration" on page 256) is rotated into one plane with the other by a waveplate, so that both spectra are directed parallel to each other on the photocathode. These optical elements add only negligible errors to polarization data. The errors are dominated by the same effects mentioned above for photometric measurements, i.e., filter-grating wheel non-repeatability, residual uncertainty of the magnetic field after GIM correction (which leads to uncertainties in the y-location of the spectra), and aperture wheel non-repeatability. The two COSTAR mirrors add an instrumental polarization of <= 2 percent compared to pre-COSTAR data. After subtraction of this component, polarimetric accuracies of <= 1 percent can be achieved. For bright objects, the polarization angles are known to within about +/- 5 degrees. An FOS Instrument Science Report on this topic is currently in preparation. ------------------------------------------------------------------------------ CHAPTER 17: Specific FOS Calibration Issues In This Chapter... Effects of COSTAR on FOS Data Aperture Dilution Correction for Extended Sources RAPID Mode Observation Timing Uncertainties This chapter describes some specific issues relating to the FOS instrument and the calibration of its data. Effects of COSTAR on FOS Data The FOS is one of the HST instruments that are only moderately affected by the COSTAR deployment.
Owing to the narrower point spread function (PSF) of the telescope, the throughput of a given aperture for point sources is higher now than in the past. For example, the throughputs of the 4.3", 1.0", and 0.3" apertures went up by factors of 1.3, 2.0, and 2.5, respectively. The narrower PSF more selectively illuminates fine-scale photocathode granularity rather than smoothing it out, as was the case pre-COSTAR. As a result, very precise target acquisitions are needed to achieve high flatfielding accuracy. The narrower PSF also leads to a slightly narrower line spread function (LSF) for post-COSTAR data compared to the pre-COSTAR era. In addition, the size of the diodes projected on the sky has changed slightly because of the change in focal length of the instrument (1.29" post-COSTAR vs. 1.43" pre-COSTAR). Polarimetric measurements are affected because additional optical elements (mirrors) were introduced into the light path, and this changes the characteristics of the incoming wavefronts. On the other hand, the wavelength calibration appears not to have been measurably affected. Thus, although some details of the FOS instrument characteristics have changed, the overall performance is not severely affected by COSTAR. Some of these changes have been discussed here; others still need to be quantified and have yet to be published. Check our online documentation on the world wide web frequently; this is where the most recent results and publications will be listed. Aperture Dilution Correction for Extended Sources Flux calibrations are determined from observations of standard stars and automatically compensate only for the light in the PSF that falls outside the aperture. For observations of extended sources, a correction needs to be applied to the final flux-calibrated spectrum to account for the different illumination pattern.
The correction is given by I = (F x A(ap) x T_4.3) / Omega where: * I = Specific surface intensity of a diffuse source in ergs s^-1 cm^-2 A^-1 arcsec^-2. * F = Flux in the calibrated spectrum. * A(ap) = Relative point source transmission through the aperture area, normalized so that A(4.3) = 1. These values are given in Table 17.1. * Omega = Solid angle of the aperture in square arcseconds, e.g., 4.3" x 1.4" (pre-COSTAR) and 3.7" x 1.3" (post-COSTAR) for the 4.3" aperture. * T_4.3 = Absolute transmission for a point source at zero focus of the 4.3" aperture. This number cannot be measured directly but is estimated to be ~0.73 (pre-COSTAR) and 0.95 (post-COSTAR). More details are provided in a cookbook available from the Help Desk (help@stsci.edu). See Instrument Science Reports 106 and 107.
Table 17.1: Recommended Pre-COSTAR Aperture Corrections and Uncertainties at Nominal OTA Focus
Grating  B3 (1")          B1 (0.5")        B2 (0.3")        C2-SLIT
Mode     BLUE RED  UNC^a  BLUE RED  UNC    BLUE RED  UNC    BLUE RED  UNC
------------------------------------------------------------------------------
HIGH     0.58 0.60 .02    0.41 0.44 .02    0.27 0.31 .03    0.39 0.41 .02
LOW      0.65 0.67 .06    0.46 0.50 .04    0.31 0.35 .03    0.43 0.4  .03
PRISM    0.53 0.54 .06    0.37 0.39 .04    0.26 0.30 .03    0.37 0.39 .03
------------------------------------------------------------------------------
a. The uncertainties (UNC) do not include the possible contributions of pointing errors, OTA breathing, jitter, or Y-base errors in an arbitrary science observation.
The deployment of COSTAR produced a narrower PSF, which changed the throughputs of the apertures considerably. For the large apertures (1.0" and larger) there is no measurable difference between FOS/RD and FOS/BL; therefore, we list the two detectors separately only for the 0.3" aperture and the slit in Table 17.2. The wavelength (grating) at which the throughputs are measured plays a more important role; therefore, we list the ratios for the different dispersers separately.
The numbers are taken from Bohlin and Colina (1995), FOS Instrument Science Report 136.
Table 17.2: Post-COSTAR Average Aperture Throughput Ratios Relative to A-1
Grating  B3 (1")       B1 (0.5")     B2 (0.3")      C2-SLIT
         BLUE and RED  BLUE and RED  BLUE   RED     BLUE   RED
------------------------------------------------------------------------------
G130H    0.875         0.730         0.640          0.615
G190H    0.900         0.810         0.720  0.720   0.715  0.715
G270H    0.920         0.870         0.780  0.790   0.745  0.770
G400H    0.950         0.890         0.800  0.830   0.780  0.800
G570H    0.960         0.890                0.840          0.810
G780H    0.960         0.895                0.780          0.820
G160L    0.895         0.790         0.700  0.720   0.675  0.720
G650L    0.955         0.900                0.840          0.795
PRISM    0.910         0.835         0.730  0.770   0.720  0.760
------------------------------------------------------------------------------
Information on the data quality is stored in the .cqh, .c2h, and .c3h files. The .c2h file contains propagated statistical errors, assuming Poisson statistics; the .c3h file is a special statistics file that is produced for RAPID and PERIOD mode and for spectropolarimetric data. For RAPID and PERIOD data, the .c3h file contains total or average fluxes for each frame and the associated statistical errors. The data quality flags used for FOS data are compiled in Table 17.3.
Table 17.3: FOS Data Quality Flag Values (Calibrated Data)
Flag Value  Description
------------------------------------------------------------------------------
Category 1: Data not useful. Data values set to zero.
800  Data filled
700  Data filled due to GIM correction
400  Disabled channel
300  Severe saturation (uncertainty greater than 50 percent)
200  Inverse sensitivity invalid (lambda < 1100 A or lambda > 7000 A)
Category 2: Data uncertain. Uncertainty not indicated in error calculation.
190  Large saturation correction (uncertainty greater than 20 percent)
170  Intermittent noisy channel
160  Intermittent dead channel
130  Moderate saturation correction (uncertainty greater than 5 percent)
120  Sky or background fixed or extrapolated
100  Reed-Solomon decoding error
Category 3: Data uncertain.
Uncertainty in propagated error file.
50           Sampling less than 50 percent of nominal
------------------------------------------------------------------------------

Many other useful tasks (besides calfos) for handling FOS data can be found in the IRAF/STSDAS package; see Chapter 2 for more details, including the names and brief descriptions of the tasks most important for spectroscopy.

RAPID Mode Observation Timing Uncertainties

Under certain very specific and rather complicated circumstances, the start times of individual exposures in an FOS RAPID mode time series must be calculated in a special manner. We recommend that RAPID mode observers contact the Help Desk (e-mail: help@stsci.edu) for help in determining precise start times of their exposures.
------------------------------------------------------------------------------
PART 5: Goddard High Resolution Spectrograph

This chapter describes the Goddard High Resolution Spectrograph (GHRS) instrument, the calibration process, the reference files and tables used in calibrating GHRS data, and some common problems and their solutions. To help illustrate the process of data analysis we use some case studies, our primary example being a program that obtained both images and spectra of several stars in R136a in the Large Magellanic Cloud. In addition to the information provided in this handbook, the GHRS Instrument Handbook provides the information used to prepare GHRS observation proposals, including details of the construction and operation of the instrument. This document generally does not repeat the instrument descriptions given in the GHRS Instrument Handbook; however, it does include much of the information that could be obtained from the GHRS Instrument Science Reports. Because the GHRS is expected to be removed from HST as part of the 1997 servicing mission, version 6.0 will likely be the last GHRS Instrument Handbook; the information in this manual, however, will continue to be maintained.
Because the GHRS changes with time, and our knowledge of it does as well, updates to software, reference files, calibration files, etc., are made from time to time. We urge you to check the GHRS web page or to consult the STScI Help Desk (help@stsci.edu) before proceeding if you are unsure of what files or software to use. The GHRS web page may be found at the following URL:

http://www.stsci.edu/ftp/instrument_news/GHRS/topghrs.html

With a little guidance, practice, and experience, the software for reducing GHRS observations (the STSDAS calhrs task) is fairly straightforward to use. Practical difficulties sometimes arise, but most questions have to do with estimating the uncertainties in the reduced quantities after the software has been run. Some of these uncertainties can be determined quantitatively, but others can only be estimated based on experience. We will speak of the dimensions of GHRS data in the form in which you probably examine them: flux versus wavelength at various times. The dimensions are these:

* Wavelength: The wavelength scale is established from measurements of the spectrum of a calibration lamp, either from a lamp observation obtained in conjunction with the astrophysical observations, or by using the default wavelength calibration. The wavelength scale and its accuracy can depend on the temperature of the GHRS, the geomagnetic latitude, and time. Corrections must also be made for the differing locations of the apertures used for observing.

* Flux: The flux scale is determined by observing standard stars that have fluxes we believe we know and which are not thought to change with time. There are substantial uncertainties in establishing the absolute flux scale and in comparing other stars to it because of such things as the blaze function of the grating, continuum placement, "vignetting," and so on.
Relative fluxes can be determined much more precisely, making it possible to compare, say, two stars at the same wavelength, or the same star at the same wavelength at different times. For extended objects, the absolute throughput of the aperture used is also important.
------------------------------------------------------------------------------
CHAPTER 21: GHRS Instrument Overview

In This Chapter... Dispersers Detectors Internal Calibration Side 1 COSTAR

The GHRS is one of the first-generation science instruments aboard HST. The spectrometer was designed to achieve high spectral resolution, high photometric precision, and high sensitivity in the wavelength range 1100 to 3200 A. The instrument is a modified Czerny-Turner spectrograph and has an assortment of components: two science apertures (large: LSA, and small: SSA), two detectors (D1 and D2), dispersers, and camera mirrors. There are also a wavelength calibration lamp, flatfield lamps, and mirrors to acquire and center objects in the observing apertures. A more in-depth description of the GHRS can be found in the GHRS Instrument Handbook; schematics of the mechanical and optical layout appear in version 6.0 of that handbook, Figures 6-1 to 6-9. The GHRS was installed as one of the axial scientific instruments, with the entrance aperture adjacent to FGS 2 and FGS 3. With the installation of COSTAR, the entrance apertures are at the former position of the HSP. The GHRS has two science apertures, designated the Large Science Aperture (LSA or 2.0) and the Small Science Aperture (SSA or 0.25). The 2.0 and 0.25 designations are the pre-COSTAR sizes of the apertures in arcseconds. The LSA has a shutter to block light from entering the spectrograph, while the SSA is always open. Because of this, scattered light from a target in the SSA can contaminate a wavelength calibration exposure (wavecal). The locations of the GHRS apertures relative to the spacecraft axes are displayed in Figure 21.1.
Figure 21.1: Locations of GHRS Apertures Relative to Spacecraft Axes

Light from an astronomical target is collected by HST and focused on one of the two GHRS apertures. After passing through an aperture, the light strikes the collimating mirror and is directed toward the carousel. The collimated beam illuminates one of the gratings or a flat mirror. Selection of a grating or mirror is performed by rotating the carousel. Camera mirrors focus the dispersed light onto one of the Digicon photocathodes. Photoelectrons from the Digicons are focused onto a linear silicon diode array.

Dispersers

The dispersers are mounted on a rotating carousel, together with several plane mirrors used for acquisition. The first-order gratings are designated G140L, G140M, G160M, G200M, and G270M, where "G" indicates a grating, the number indicates the blaze wavelength (in nm), and the "L" or "M" suffix denotes a "low" or "medium" resolution grating, respectively. The GHRS medium-resolution first-order gratings are holographic in order to achieve very high efficiency within a limited wavelength region; G140L is a ruled grating. The first two gratings, G140L and G140M, have their spectra imaged by mirror Cam-A onto detector D1, which is optimized for the shortest wavelengths (about 1050 to 1700 A). The other three gratings have their spectra imaged by Cam-B onto detector D2, which works best at wavelengths from about 1700 to 3200 A but is also useful down to 1200 A. The useful wavelength ranges of the first-order gratings are listed in Table 21.1.
Table 21.1: Useful Wavelength Ranges for First-Order Gratings

Grating   Useful Range (A)   A per diode   Bandpass (A)   Comment
------------------------------------------------------------------------------
G140L     1100-1900          0.572-0.573   286-287
G140M     1100-1900          0.056-0.052   28-26
G160M     1150-2300          0.072-0.066   36-33          2nd order overlap above 2300 A
G200M     1600-2300          0.081-0.075   41-38          2nd order overlap above 2300 A
G270M     2000-3300          0.096-0.087   48-44          2nd order overlap above 3300 A
------------------------------------------------------------------------------

Table 21.1 summarizes the useful wavelength range for each of the first-order gratings of the GHRS. Note that little or no flux below 1150 A is reflected by the COSTAR mirrors because of their magnesium fluoride coatings. The carousel also has an echelle grating. The higher orders are designated as mode Ech-A, and they are imaged onto D1 by the cross-disperser CD1; the lower orders are designated as mode Ech-B and are directed to D2 by CD2. The wavelength range, bandpass, and sensitivity of the echelle modes can be found in the GHRS Instrument Handbook, version 6.0, Table 8-3. Finally, mirrors N1 and A1 image the apertures onto detector D1, and mirrors N2 and A2 image onto D2. The "N" mirrors are "normal," i.e., unattenuated, while the "A" mirrors ("attenuated") reflect a smaller fraction of the light to the detectors, so as to enable the acquisition of bright stars. (The mode designated as N1 actually uses the zero-order image produced by grating G140L.) Bright targets are acquired with one of the attenuated mirrors, A1 or A2; these acquisitions take substantially longer than if the N1 or N2 mirrors were used. SSA acquisitions (ACQ/PEAKUPs) with the A1 mirror are usually doubled up to center the target in the SSA. Acquiring a target in the SSA first requires an LSA acquisition (3x3 search) followed by an SSA acquisition (5x5 search). The GHRS LSA is 74 arcseconds from the FOS Blue aperture.
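As a quick consistency check on Table 21.1: since each Digicon has 500 science diodes, the tabulated bandpasses follow directly from the per-diode dispersions. The following is a purely illustrative Python sketch (not an STSDAS task), with the numbers copied from the table:

```python
# Illustrative check, not part of any GHRS software: the bandpass of each
# first-order grating is roughly (A per diode) x 500 science diodes.
SCIENCE_DIODES = 500
# grating: (A-per-diode endpoints, tabulated bandpass endpoints in A),
# copied from Table 21.1.
table_21_1 = {
    "G140L": ((0.572, 0.573), (286, 287)),
    "G140M": ((0.056, 0.052), (28, 26)),
    "G160M": ((0.072, 0.066), (36, 33)),
    "G200M": ((0.081, 0.075), (41, 38)),
    "G270M": ((0.096, 0.087), (48, 44)),
}
for grating, (per_diode, bandpass) in table_21_1.items():
    for d, b in zip(per_diode, bandpass):
        # agreement to within the rounding of the tabulated values
        assert abs(d * SCIENCE_DIODES - b) <= 0.51, grating
```

The agreement is to within about half an angstrom, i.e., the rounding of the tabulated per-diode dispersions.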
Starting with Cycle 5, it became possible to acquire a faint target with the FOS and slew the telescope to position the target into the GHRS LSA. It also became possible to acquire a bright target with the GHRS and slew the telescope to position the target into the FOS Blue aperture. Therefore, a GHRS dataset obtained during Cycles 5 and 6 may contain FOS observations. Use of the various gratings or mirrors produces one of three kinds of GHRS data: 1) an image of the entrance aperture, which may be mapped to find and center the object of interest; 2) a single-order spectrum; or 3) a cross-dispersed, two-dimensional echelle spectrum. Detectors The flux in these images is measured by photon-counting Digicon detectors; the portion of the image plane that is mapped onto the Digicon is determined by magnetic deflection coils. The detectors are the heart of the GHRS and they involve subtleties that must be understood if data are to be reduced properly and competently. First, there are two Digicons: D1 and D2. D1 has a cesium iodide photocathode on a lithium fluoride window that makes D1 effectively solar-blind, i.e., the enormous flux of visible-light photons that dominate the spectrum of most stars will produce no signal with this detector, and only far-ultraviolet photons (1060 to 1800 A) produce electrons that are accelerated by the 23 kV field onto the diodes. D2 has a cesium telluride photocathode on a magnesium fluoride window. Each Digicon has 512 diodes that accumulate counts from accelerated electrons. 500 of those are science diodes, plus there are corner diodes and focus diodes. The 500 science diodes are 40 x 400 microns on 50 micron centers. The focus diodes are 25 x 25 microns and there are three located on each end of the array. Two 1,000 x 100 micron diodes are used to measure background and two 1,000 x 100 micron diodes are used to monitor high energy protons. Eight diodes map the LSA and the SSA is 1 diode wide and 1/8 diode high. 
Second, both photocathodes have granularity (irregularities in response) of about 0.5 percent (rms) that can limit the S/N achieved, and there are localized blemishes that produce irregularities of several percent. The Side 1 photocathode also exhibits "sleeking": slanted, scratch-like features that have an amplitude of 1 to 2 percent over regions as large as half the faceplate. The effects of these irregularities could in principle be removed by obtaining a flatfield measurement at every position on the photocathode, but that is, in general, impractical. Instead, the observing strategy is to rotate the carousel slightly between separate exposures so as to use different portions of the photocathode. This procedure is called an FP-SPLIT, and with it each exposure is divided into two or four separate-but-equal parts, with the carousel moving the spectrum about 5.2 diode widths each time in the direction of dispersion. These individual spectra can be combined during the reduction phase. Third, the diodes in the Digicons also have response irregularities, but these are very slight. The biggest effect is a systematic offset of about 1 percent in the response of the odd-numbered diodes relative to the even-numbered ones. This effect can be almost entirely defeated by use of the default COMB addition procedure. COMB addition deflects the spectrum by an integral number of diodes between subexposures and has the additional benefit of working around dead diodes in the instrument that would otherwise leave image defects. Fourth, the Digicons' diodes are about the same width as the FWHM of the point spread function (PSF) for HST. Thus the true resolution of the spectrum cannot be realized unless it is adequately sampled. That is done by making the magnetic field move the spectrum by fractions of the width of a diode, by either half- or quarter-diode widths, and then storing those as separate spectra in the onboard memory.
These are merged into a single spectrum in the data reduction phase. The manner in which this is done is specified by the STEP-PATT parameter, described in more detail later. The choice of STEP-PATT also determines how the background around the spectrum is measured.

Internal Calibration

The GHRS was built with two Pt-Ne hollow-cathode lamps that provide a rich spectrum of emission lines for accurate calibration of wavelengths. The locations of these lamps and the way in which they illuminate the spectrograph optics are illustrated in the GHRS Instrument Handbook. The apertures through which calibration exposures are made are the same size as the Small Science Aperture (SSA). Moreover, calibration aperture SC2, the one most frequently used, is offset from the SSA in the x direction (the direction of dispersion) but is aligned in the y direction (see Figure 21.1). This offset in x introduces a systematic shift in wavelength between the SSA and SC2 because light from them hits the gratings at different angles. By convention, the wavelength scale of the GHRS is calculated so as to be correct for the SSA. These wavelength calibration lamps are also used for other internal instrumental calibrations, such as DEFCALs (deflection calibrations) and SPYBALs (Spectrum Y Balance). Only one of these lamps has been available for use since the failure of Side 1 in 1991. A wavelength calibration exposure ("wavecal") is obtained by specifying an ACCUM in the Phase II Proposal with a target of WAVE and an aperture of SC2. (Prior to 1991, SC1 was a valid aperture designation, and so older observations may show it.) A wavecal is not the only way to assess the quality of the wavelength scale for observations. As will be noted in the example for R136a, a SPYBAL is obtained just before an ACCUM that uses a grating for the first time.
A SPYBAL is just a wavecal that is obtained at a predetermined carousel position for each grating; SPYBALs are done to align the spectrum on the diodes in the y direction (perpendicular to the dispersion). A SPYBAL contains wavelength information that can be used to check for a zero-point offset.

Side 1

The installation of the GHRS Redundancy Kit during the first servicing mission (December 1993) eliminated the risk of a Side 1 power supply failure affecting Side 2. With this risk removed, Side 1 operations were resumed during Cycle 4. These operations have been nominal. When the Side 1 carousel is commanded through the Side 2 electronics during a Side 1 observation, the telemetry word containing the carousel position will read the Side 2 commanded position during carousel configuration, and the Side 1 encoder position after integration. The Side 1 GHRS data headers do contain the correct Side 1 position of the carousel. GHRS Side 1 observations cover the time spans April 1990 to June 1991 and February 1994 to the Second Servicing Mission.

COSTAR

The spherical aberration of the HST primary mirror was corrected during the first servicing mission (December 1993) with the installation of the Corrective Optics Space Telescope Axial Replacement (COSTAR) assembly. COSTAR deployed corrective reflecting optics in the optical path of the GHRS. The post-COSTAR GHRS has a different response with wavelength than the pre-COSTAR instrument. Interim sensitivity files were installed in CDBS on 16 April 1994 and updated during Cycles 4 and 5 as calibration observations became available. GHRS observations obtained early in Cycle 4 may require recalibration. See GHRS ISR 062, "First Measures of Sensitivity for the Post-COSTAR GHRS with Interim Values for Data Analysis," and ISR 071, "Stability of the GHRS Sensitivity During Cycle 4," for more details.
------------------------------------------------------------------------------
CHAPTER 22: GHRS Planned vs.
Executed Observations

In This Chapter... Case 1: R136a in the LMC Case 2: RAPID Mode (and a little about Spatial Scans)

In this chapter we will use several examples of actual GHRS programs to illustrate how Phase II Proposals get turned into observations and data products. We hope that these are useful to you in understanding your own data. Complete explanations are provided later for reference.

Case 1: R136a in the LMC

Program 5297 was executed in the spring of 1994 as an Early Release Observation (ERO), following the 1993 servicing mission. During that mission, COSTAR mirrors were deployed so that the full optical quality of HST could be realized by the GHRS. Also, a repair kit was installed that allowed Side 1 of the GHRS to be used once again (Side 1 capabilities had been lost in 1991 due to the failure of a low-voltage power supply). Program 5297 was designed to take advantage of and demonstrate these capabilities by obtaining low-resolution spectra of closely-spaced hot stars in a young cluster (R136a) in the Large Magellanic Cloud (LMC). Two of the stars to be observed were only about 0.12 arcsec apart, less than the size of the GHRS Small Science Aperture (SSA), which is 0.22 arcsec square. Thus, precise positioning was needed. To achieve this, the telescope was first to acquire and center on the brightest object in the R136a field. That object was itself the convolution of two stars, so some allowance had to be made for the apparent centroid of what was acquired. This was done using information from a WF/PC-1 image of R136a obtained before the servicing mission. Once centered on R136a1, the SSA could then be moved with high relative precision, but to ensure understanding of what was observed, IMAGEs were taken at each position.

The Exposure Logsheet

An abridged version of the Phase II Proposal for 5297 is shown in Figure 22.1. The first portion of the figure is the Target List and the bottom half is the Exposure Logsheet.
Here are some notes to go with individual lines of the Exposure Logsheet:

* Line 1: R136a1 is acquired with the Large Science Aperture (LSA). The LSA is denoted as "2.0" but has a true size of 1.74 arcsec square with the COSTAR mirrors in place. SEARCH-SIZE=5 is specified to ensure a good acquisition by providing a wider search area. (The default SEARCH-SIZE=3 is adequate in nearly all cases, but the crowding of this field led to using a more liberal value.) The specified exposure time of 25 seconds implies a STEP-TIME of 1.0 second; STEP-TIME is the dwell per point in the spiral search pattern. MIRROR-N2 is used because these stars are faint. Using MIRROR-N1 would be preferable for brighter objects because then it would be unnecessary to switch from using detector D2 on Side 2 (implied by MIRROR-N2) to detector D1 on Side 1 (implied by using Spectral Element G140L on line 20). Switching from one Side of the GHRS to the other takes about 40 minutes because of the need to let the electronics stabilize. However, MIRROR-N1 reflects light only from about 1200 to 1700 A, and the STEP-TIME needed to acquire these stars would exceed the maximum permitted value of 12.75 seconds. Note the comment explaining how the exposure time was estimated and the Special Requirements that tie this acquisition to the subsequent exposures. Acquisitions are explained in detail in the GHRS Instrument Handbook. The comment about BRIGHT=RETURN means that the onboard acquisition algorithm will determine which dwell point had the most counts and will center the LSA at that point.

* Line 2: Here the object R136a1 is centered in the LSA. The exposure time of 102 seconds implies a STEP-TIME of 1.0 second, as for the initial acquisition. As we noted, what is called "R136a1" is not a single object but is a convolution of R136a1 with R136a2 because of the finite resolution of HST and the size of the LSA. An estimate of the centroid of a1+a2 was made from a pre-COSTAR WF/PC-1 image.
* Line 3: Here R136a1 is centered in the SSA, with SEARCH-SIZE=5 and an implied STEP-TIME of 1.0 second. The true post-COSTAR size of the SSA is 0.22 arcsec square. As for line 1, the acquisition algorithm will automatically determine which dwell point had the most counts and will move the SSA to that point. (The SSA ACQ/PEAKUP algorithm is the same as the LSA BRIGHT=RETURN acquisition algorithm.)

* Line 4: An IMAGE of R136a1 in the SSA is obtained. The values of NX, NY, DELTA-X, and DELTA-Y were chosen to fully cover the SSA with the minimum spacing between points. The time spent at each point is 1.0 second, as implied by the 169 second total exposure time.

* Line 5: This is the same as line 4 except that the object is R136a2. This implies that the telescope has been moved 0.052 arcsec north and 0.105 arcsec east, for a net motion of 0.117 arcsec, about half the size of the SSA. An IMAGE is taken to confirm the positioning.

* Line 20: This line obtains a ten-minute exposure on R136a2 with grating G140L centered at 1300 A. The STEP-PATT value chosen (3) is not the default value of 5, but was selected to maximize the time on target at the cost of some uncertainty in the background. Because the object is faint and low signal-to-noise is anticipated, FP-SPLIT=NO has been chosen. Using a Side 1 spectral element (G140L) implies a wait of about 40 minutes between lines 5 and 20 to allow one detector to be turned off and the other to be turned on and brought to a stable configuration.

* Line 21: Here a second spectrum, centered at 1610 A, is obtained. Lines 20 and 21 will provide as much spectrum as G140L is capable of delivering.

* Line 50: Now the target is R136a5, implying a small motion of the telescope. The central wavelength is the same as for line 21 to eliminate a grating movement. A longer exposure of 30 minutes is needed for this fainter target. In this case the FP-SPLIT feature has been used.

* Line 51: Same as line 20, but for R136a5.
Again, FP-SPLIT is used.

* Line 52: A wavelength calibration exposure obtained at the previous position for G140L.

* Line 99: An IMAGE of R136a5, analogous to line 5. As before, the use of MIRROR-N2 implies waiting about 40 minutes to switch Sides.

* Line 100: An IMAGE of R136a5 in the LSA with MIRROR-N2.

* Line 101: This "acquisition" at the end of the sequence of operations may seem odd, but what it achieves is a coarse image over a broader area (SEARCH-SIZE=3) than an IMAGE itself is capable of.

Each of these lines is an operation specified by the observer, and they correspond closely to the operations performed by the instrument and the data files that are generated, as shown in Table 22.1. For example, each of the ACQuisitions, ACQ/PEAKUPs, and IMAGEs results in a data file, but each is also preceded by a DEFCAL. This is a DEFlection CALibration, in which an internal lamp illuminates an aperture (SC2), which the acquisition mirror then images on the photocathode. Software on the spacecraft then determines where the actual image of the aperture is falling on the diodes so that the image can be properly centered. The image moves slightly due to thermal effects and the earth's magnetic field. These DEFCALs are essential for proper operation of the GHRS, but the information they contain is rarely of use to the observer. You will note that many, but not all, of the ACCUMs are preceded by a SPYBAL exposure. SPYBAL stands for SPectrum Y BALance, and it, like a DEFCAL, is performed to ensure proper alignment of the spectrum on the science diodes. However, a SPYBAL is an actual spectrum recorded at a selected wavelength for each grating (chosen to provide a uniform distribution of comparison lines), and it is performed each time a different grating is used for the first time. In longer programs SPYBALs will appear about every 90 minutes of alignment time as well, in order to compensate for thermal drifts and the like.
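Several of the timing and pointing numbers quoted in the notes above can be verified with simple arithmetic. The following is a purely illustrative Python sketch (not an STSDAS task; the 13 x 13 raster size for line 4 is our inference from the 169 second exposure, since NX and NY themselves are not quoted):

```python
import math

# Line 1: SEARCH-SIZE=5 means a 5 x 5 spiral search (compare the 3x3 and
# 5x5 searches described in Chapter 21); 25 dwell points at the implied
# STEP-TIME of 1.0 s gives the specified 25 s exposure time.
assert 5 * 5 * 1.0 == 25.0

# Line 4: 169 s at 1.0 s per point implies 169 dwell points, consistent
# with a 13 x 13 raster (our inference; NX and NY are not quoted above).
assert 13 * 13 == 169

# Line 5: 0.052 arcsec north and 0.105 arcsec east combine to a net
# motion of about 0.117 arcsec, roughly half the 0.22 arcsec SSA.
net = math.hypot(0.052, 0.105)
assert abs(net - 0.117) < 0.001
print(f"net slew = {net:.3f} arcsec")
```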
An exposure will not be interrupted to insert a SPYBAL. In general, SPYBALs will be obtained at some wavelength that is different from the one at which you obtained your observations. However, the primary changes in wavelength occur in the zero point, not in the dispersion. As a result, a SPYBAL may be used to derive a correction to the default wavelengths that ends up being nearly as good as if a full wavelength calibration had been obtained.

Figure 22.1: Exposure Logsheet for Program 5297

Table 22.1: Relationship Between Proposal Lines and Files for Program 5297

Logsheet                        MIRROR or             Central         ROOTNAME for
Line     OBSMODE     APERTURE   GRATING    TARGNAME   FP-SPLIT   Wavelength (A)   output file
------------------------------------------------------------------------------
1        DEFCAL                                                                   z2bd0101t
         ACQ         LSA                   R136a1                                 z2bd0102t
2        DEFCAL                                                                   z2bd0103t
         ACQ/PEAKUP  LSA                   R136a1                                 z2bd0104t
3        DEFCAL                                                                   z2bd0105t
         ACQ/PEAKUP  SSA                   R136a1                                 z2bd0106t
4        DEFCAL                                                                   z2bd0107t
         IMAGE       SSA        N2         R136a1                                 z2bd0108t
5        DEFCAL                                                                   z2bd0109t
         IMAGE       SSA        N2         R136a2                                 z2bd010at
20       SPYBAL      SC2        G140L      WAVE       NO         1414.899         z2bd010bt
         ACCUM       SSA        G140L      R136a2     NO         1304.579         z2bd010ct
21       ACCUM       SSA        G140L      R136a2     NO         1608.027         z2bd010dt
50       SPYBAL      SC2        G140L      WAVE       NO         1414.903         z2bd010et
         ACCUM       SSA        G140L      R136a5     FOUR       1598.548         z2bd010ft
51       SPYBAL      SC2        G140L      WAVE       NO         1414.728         z2bd010gt
         ACCUM       SSA        G140L      R136a5     FOUR       1294.922         z2bd010ht
52       ACCUM       SC2        G140L      WAVE       NO         1313.492         z2bd010it
99       DEFCAL                                                                   z2bd010jt
         IMAGE       SSA        N2         R136a5                                 z2bd010kt
100      DEFCAL                                                                   z2bd010lt
         IMAGE       SSA        N2         R136a5                                 z2bd010mt
101      DEFCAL                                                                   z2bd010nt
         ACQ         LSA        N2         R136a5                                 z2bd010ot
------------------------------------------------------------------------------

Table 22.2 shows some of the different kinds of files generated in this program for the different operations that resulted from the Exposure Logsheet. The entries are Exposure Logsheet line numbers. The lines at the bottom of the table show what kinds of files are generated for the different kinds of operations.
For example, all observations generate log files with extensions .shh, .ulh, and .trl. The * next to the acquisition entries for lines 1 and 3 indicates that these observations also generate .d1h files and the corresponding .q1h data quality files. .d1h files are generated for Return-to-Brightest (RTB) acquisitions and SSA peakups (which use the RTB algorithm). Notice that LSA peakups do not generate .d1h files; nor would an LSA acquisition that does not use the RTB algorithm.

Table 22.2: Breakdown of Files by Proposal Line Number for Program 5297

TARGET   DEFCAL  ACQ   ACQ/PEAKUP  IMAGE (MAP)  SPYBAL  WAVECAL  ACCUM  ACCUM (FP-SPLIT)
------------------------------------------------------------------------------
R136a1   1       1*
         2             2
         3             3*
         4                         4
R136a2   5                         5
                                                20               20
                                                                 21
R136a5                                          50                      50
                                                51                      51
                                                        52
         99                        99
         100                       100
         101     101
------------------------------------------------------------------------------
File extensions generated, by type of operation:
* DEFCAL: log files only (.shh, .ulh, .trl).
* ACQ, ACQ/PEAKUP, and IMAGE: all log files, plus raw data files (.d0h, .q0h, .x0h, .xqh).
* SPYBAL, WAVECAL, and ACCUM: all log files and all raw data files, plus calibrated files (.c0h, .c1h, .c2h, .c3h, .c4h, .c5h, .cqh).

Examining the ACCUMs

There are many, many data files for this program, but to start you would probably like to look at the reduced spectra, and then go back to understand how they got the way they are and how the calibrations might be improved. The reduced wavelength and flux files have extensions of .c0h and .c1h (for a complete explanation of file extensions, see Table 23.2 on page 348). GHRS observations have separate files for wavelengths because they are on a non-linear scale; i.e., there is a distinct wavelength associated with each output data point, and those wavelengths are not necessarily evenly spaced. For a quick look, we can use fwplot, a task in stsdas.
The fwplot task will display flux versus wavelength, and it can find the right wavelength file automatically (unless you have changed its name!), so we use the following to examine the first group of a repeat observation containing R136a2:

>fwplot z2bd010ct.c1h

to get what is shown in Figure 22.2. Plots of the other spectra can be obtained by changing the rootname. fwplot can also plot error bars and the like, which we'll get to in a moment.

Figure 22.2: Flux vs. Wavelength for R136a2

Now that we have seen the spectrum, let's understand better why it looks the way it does. First let's look at counts versus pixel to get a sense of the quality of the observations. Since there are three repeats in this observation, let's look at all four of the corresponding groups of data. We must create a listing of the file names with the group numbers attached, so we use grlist and redirect the output to a file. Then we stack the spectra using sgraph:

>grlist z2bd010ct.d0h - > lis
>sgraph @lis st+

Note how few counts we have at the short-wavelength (left) end (Figure 22.4). We can see the effect of this in the reduced spectrum. If we use fwplot to plot a small part of the reduced spectrum with its errors (Figure 22.3), we get essentially the same information in a more compact form, and we can see that the errors are the statistical ones calculated from the raw counts.

Figure 22.3: Reduced Spectrum of R136a2 with Error Bars

Figure 22.4: Raw Counts vs. Diode Number for Four Spectra in z2bd010c

The reduced spectrum in Figure 22.3 has a wavelength scale that is the default from the pipeline reduction system. There are two ways to improve on this: either by using a wavelength calibration exposure or by using a SPYBAL. We can take the SPYBAL z2bd010bt and process it with waveoff. The wavecal exposure is shown in Figure 22.5, and the net result is that we compute a wavelength offset of 0.708 A. See the help file for waveoff for examples of how to use it.
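Conceptually, the correction derived from the SPYBAL is just a constant zero-point shift applied to the calibrated wavelength array; the STSDAS waveoff task does this for real data. A purely illustrative Python sketch (the sample wavelengths are hypothetical, and the sign of the shift depends on the direction of the measured offset):

```python
# Illustrative only; use the STSDAS waveoff task on real GHRS data.
offset = 0.708  # Angstroms; the zero-point offset derived from the SPYBAL
# Hypothetical wavelength samples, spaced at roughly the G140L dispersion:
wavelengths = [1300.000, 1300.572, 1301.144]
corrected = [w + offset for w in wavelengths]
```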
Figure 22.5: SPYBAL Exposure z2bd010bt

Putting FP-SPLITs Back Together

Specifying FP-SPLIT=FOUR (or any value except "NO") on the Exposure Logsheet results in small motions of the grating carousel between individual subexposures. This moves the source spectrum slightly on the diode array, making it possible to distinguish fixed-pattern noise from true spectrum features and to correct for that pattern. The most sophisticated use of FP-SPLITs involves iteration to determine a function representing the fixed-pattern noise. This may be augmented by obtaining spectra with larger discrete motions of the grating as well; see the discussion by Cardelli et al. 1993, Ap.J., 402, L17, and Cardelli and Ebbets 1994, in Calibrating Hubble Space Telescope, HST Calibration Workshop, ed. J.C. Blades and A.J. Osmer (Baltimore: STScI), 322. Here we describe the simplest use of FP-SPLITs for achieving more modest gains in signal-to-noise. This involves simply realigning the individual subexposures so that spectrum features line up, and then adding up the spectra. The best way to align the subexposures is by cross-correlating them against the first exposure obtained. When there is not enough signal-to-noise for that, it is possible to determine the shift to apply from knowledge of instrument parameters, but the factors that limit the quality of the wavelength scale (see "Recalibrating GHRS Data" on page 369) make that method inferior to cross-correlation. The cross-correlations to restore FP-SPLITs are done in stsdas with the specalign task. An example of raw data produced by an FP-SPLIT is shown in Figure 22.6.

Figure 22.6: Raw FP-SPLIT Observations for R136a5

Note that there are four groups of four, with a shift in wavelength after every fourth spectrum. The result of shifting and adding all the subexposures for R136a5 is shown in Figure 22.7 using sgraph; note that there are more pixels in the combined spectrum than in the output of the pipeline software.
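The specalign task performs this cross-correlation and recombination for real GHRS data. To make the idea concrete, here is a hypothetical sketch that aligns each subexposure against the first by maximizing an integer-shift cross-correlation and then sums them (real data also need edge handling that this crude wraparound ignores):

```python
import numpy as np

def align_and_sum(subexposures, max_shift=10):
    """Align FP-SPLIT subexposures to the first one by integer-diode
    cross-correlation, then sum them (illustrative sketch only)."""
    ref = np.asarray(subexposures[0], dtype=float)
    total = np.zeros_like(ref)
    for spec in subexposures:
        spec = np.asarray(spec, dtype=float)

        def score(s):
            # overlap cross-correlation of ref and spec at trial shift s
            a, b = (ref[s:], spec[:len(spec) - s]) if s >= 0 else (ref[:s], spec[-s:])
            return float(np.dot(a, b)) / len(a)

        best = max(range(-max_shift, max_shift + 1), key=score)
        total += np.roll(spec, best)  # crude: wraps around at the array ends
    return total
```

For example, four copies of a spectrum shifted by 0, 3, 6, and 9 diodes sum to four times the original after alignment.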
Figure 22.7: Summed Spectrum for R136a5

Case 2: RAPID Mode (and a little about Spatial Scans)

RAPID mode is used relatively rarely, but it has some unique capabilities. There are also some unique problems that arise in treating the resultant data. Proposal 5745 was a calibration test designed to map out the central portion of the PSF as presented to the GHRS apertures by COSTAR. This particular test made use of a spatial scan to map out the PSF while collecting data in RAPID mode.

The Exposure Logsheet

An abridged version of the Phase II Proposal for 5745 is shown in Figure 22.8. Here are some notes to go with individual lines of the Exposure Logsheet:

* Line 10: Sk-65°21 is acquired with an ONBOARD ACQ using MIRROR-N1.
* Line 15: A confirmation IMAGE is taken as a sanity check for this program. In general, an IMAGE is unnecessary.
* The geometry of the spatial scan is shown in Figure 22.9. The plot shows the relative orientations of the GHRS x,y and HST V2-V3 coordinate systems. Data points are relative offsets in units of arcseconds.
* Line 20: Sk-65°21 is moved to the SSA using an ACQ/PEAKUP. The STEP-TIME was increased to compensate for the reduced throughput of the SSA relative to the LSA and to increase the signal-to-noise for the PEAKUP algorithm.
* Line 25: Another ACQ/PEAKUP. Again, this is an exception rather than the rule for SSA ACQ/PEAKUPs (except when using MIRROR-A1).
* Line 40: This is the line where we get the spectra. We have specified a RAPID mode observation of 112.5 minutes duration with a SAMPLE-TIME of 1.0 second. We expect to get about 6750 individual spectra from this observation.
* Spatial Scan: The spatial scan specifies a dwell scan that is 15 dwell points by 15 dwell points in extent. The spacing between dwell points is 0.053 arcseconds (about 2 deflection steps). The time spent at each dwell point will be 30 seconds.
Note that the total number of spectra is 30 one-second exposures, times 15 dwell points, times another 15 dwell points in the other direction, for a total of 6750.

Figure 22.8: Exposure Logsheet for Program 5745

Figure 22.9: Geometry of Spatial Scan

Examining the RAPID Data

First let's look to see how many individual spectra we got:

cl> imhead ../data/z2i3010at.d0h,../data/z2i3010at.c1h l-
../data/z2i3010at.d0h[1/9024][500][real]:Z2I3010AT[1/9024]
../data/z2i3010at.c1h[1/9021][500][real]:Z2I3010AT[1/9021]

The imheader task shows that we got many more spectra than expected. Further investigation turned up the fact that ten seconds is allocated for the slew from dwell point to dwell point, so we ended up with an extra ten seconds of data for each slew. Not all of these extra spectra are useful, since the telescope was actually moving, but we ended up getting about 35 seconds per dwell point instead of the anticipated 30. The imheader output also shows that there are three fewer spectra in the calibrated output than in the raw data. Keep in mind that, since no substepping or FP-SPLIT is possible in RAPID mode, a single raw spectrum maps to a single calibrated spectrum. Contrast this with a standard ACCUM with the default STEP-PATT: the raw data will contain six individual spectra (four substeps and two background spectra), which are merged into a single calibrated spectrum. In the case of RAPID mode, the first two readouts are generated during a hysteresis sequence that precedes every observation, and the last readout is produced by a final pass deflection made at the end of every observation. These readouts contain no useful science data and are not included in the calibrated data. Manipulating a very large image can be difficult. For demonstration purposes, let's examine an arbitrary subsection of the original raw image in Figure 22.10. The gcopy task is used to extract a subset of groups from the original multi-group image.
gstat computes the statistics for the new image.

Figure 22.10: Manipulating a Large Image

Examining RAPID data as a two-dimensional image can be instructive. A simple way to convert the multi-group subset.hhh into a 2-d image is to use the gftoxdim task:

cl> gftoxdim subset.hhh subset_2d.hhh

The new image is now two-dimensional: the first dimension runs from diode 1 to 500, while the second dimension marks time.

cl> imhead subset_2d.hhh l-
subset_2d.hhh[500,41][real]: SUBSET_2D[1/1]

cl> display subset_2d.hhh 1 z1=0 z2=40 zr- zs-

We can sum the 2-d image in each dimension for further inspection:

cl> blkavg subset_2d.hhh subset_sum_x.hhh b1=1 b2=41 option=sum
cl> blkavg subset_2d.hhh subset_sum_y.hhh b1=500 b2=1 option=sum

cl> imhead subset_sum_?.hhh l-
subset_sum_x.hhh[500,1][real]:SUBSET_SUM_X[1/1]
subset_sum_y.hhh[1,41][real]:SUBSET_SUM_Y[1/1]

Figure 22.11 shows plots of three images. At the top is a greyscale plot of the extracted two-dimensional image, in the middle is a plot of the sum of the 500 diodes as a function of time, and at the bottom is a plot of the spectra summed over time.

Figure 22.11: RAPID Data

------------------------------------------------------------------------------

CHAPTER 23: Calibrating GHRS Data

In This Chapter...
Raw Science Data
Pipeline Calibration Process
Calibration Steps

The GHRS calibration task calhrs was developed and is maintained at STScI; it is part of STSDAS. The calibration process has two basic goals: to assign flux and wavelength values to each raw data point. In addition, calhrs corrects for non-uniformity in diode response and for diodes that have been turned off (dead diodes). calhrs produces, as output, the calibrated spectra, wavelength solution, error estimates, and data quality images. The calibration software used in the RSDP pipeline to calibrate GHRS observations is the same software used within the calhrs task. The calhrs task can be found within the STSDAS hst_calib package.
Raw Science Data

The raw science data images are the output of the RSDP pipeline generic conversion process: .d0h, .q0h, .d1h, .x0h, .xqh, .shh, .ulh, .trl.

Science Data

The science data (.d0h image) contains the single-precision floating point values representing the number of detected counts accumulated for each diode. Depending on the pattern (STEP-PATT) used for the observation, there may be from one to six groups of science data per pattern. The science data include the 500 science diodes. The data quality image (.q0h) associated with the science data records whether there is fill data due to technical problems with the observation or due to problems in transmitting the data from the telescope.

Return-To-Brightest and Small Science Aperture ACQ/PEAKUP

The return-to-brightest (RTB) target acquisition record will be placed in the .d1h image. This image will contain the total counts at each dwell point in the spiral search performed by the RTB acquisition algorithm. Small Science Aperture ACQ/PEAKUPs use the same algorithm as RTB target acquisitions. Therefore, they will also have a .d1h file whose header contains information on which dwell point (MAPFND) contained the largest flux (FLUXFND). Also, the RTB record is extracted and placed in the trailer file (.trl).

Extracted Data

For ACCUM and RAPID mode observations, the extracted data image (.x0h) contains the values of the twelve special diodes in the detector (see Table 23.3); 12 pixels of engineering data relevant to each pattern execution are appended to this image. The 12 special diodes are the focus, background monitor, and radiation monitor diodes. The data quality file (.xqh) records whether any of the data represent fill due to technical problems with the observation or telescope during the observation.

Standard Header Packet (SHP)

The standard header packet (image extension .shh) contains the telemetry values from the engineering data and some GHRS-unique data.
The engineering data include temperatures, currents, and voltages at various points in the instrument. The header packet also contains information used in the operation of the spacecraft, such as the target name, the position and velocity of the telescope, the right ascension and declination of the target, sun, and moon, and other information used in the observation that was provided in the Phase II part of the proposal. There is one group of .shh data per pattern used in the observation.

Unique Data Log (UDL)

The unique data log (image extension .ulh) contains the Observation Control Table used to control the aperture, detector, carousel, Digicon deflections, observing modes, and flux measurements. There are two groups of .ulh data per pattern per observation. For images and standard ACCUM mode, leading and following UDLs are read out, bracketing the observation. For RAPID mode, only one UDL is read out, before the science observation begins. Therefore, the PKTTIMEs of the UDL observations can be used to estimate the start and stop times of the observation.

Trailer File

The trailer file (extension .trl) contains the many messages generated by the conversion of the data from what is on board the spacecraft into STSDAS images. These messages include RTB information for some acquisitions and the informational messages produced by calhrs as it is used to calibrate the data.

Header Keywords

The header files provide all the information needed to calibrate GHRS data. The headers are divided into groups of keywords that deal with specific types of information (i.e., observing information, engineering information, and processing and calibration information). Table 23.1 lists a few of the important GHRS header keywords.
Table 23.1: Important GHRS Header Keywords

Keyword     Description/Value
------------------------------------------------------------------------------
GCOUNT      Number of data groups
FILETYPE    File type (shp, udl, ext, sci, img, wav, flx)
NBINS       Number of substep bins in this pattern
RPTOBS      Expected number of observation repeats
STEPPATT    Step pattern sequence
FP_SPLIT    FP-SPLIT mode (NO, TWO, FOUR, DSTWO, DSFOUR)
COMB_ADD    Comb-addition (NO, TWO, FOUR, DSTWO, DSFOUR)
FINCODE     Observation termination code
OBSMODE     Observation mode (ACCUM, RAPID, SPYBAL, DEFCAL, ACQ, ...)
DETECTOR    Detector in use (1 or 2)
GRATING     Grating, echelle, or mirror in use
APERTURE    Aperture name
DATE        Date this file was written (dd/mm/yy)
PKTFMT      Packet format code
KZDEPLOY    COSTAR deployed for the HRS (T or F)
APER_FOV    Aperture field of view (arcsec); N/A for cal
FGSLOCK     Commanded FGS lock (FINE, COARSE, GYROS, UNKNOWN)
DATE-OBS    UT date of start of observation (dd/mm/yy)
TIME-OBS    UT time of start of observation (hh:mm:ss)
EXPTIME     Exposure duration in seconds (calculated)
------------------------------------------------------------------------------

An HST observation is composed of packets of information sent down to the ground, and each observation packet has an embedded packet format code to identify the packet as an .shp, .udl, or .sci. Each science packet has a packet format code (PKTFMT) that identifies the type of science data. The default GHRS calibration switches are set by the PKTFMT keyword value, specified by the type of observation (Target Acquisition, IMAGE, ACCUM, or RAPID mode). The default calibration switches have been selected to achieve the best possible calibration. The PKTFMT values are set when the observations are scheduled. They are used to determine the values of the science header keywords. This information is stated here for completeness. The eighteen GHRS calibration steps are listed and described at the end of this chapter.

Pipeline Calibration Process

The RSDP pipeline will only perform those calibration steps specified by the calibration switches in the raw science header (.d0h). If there are problems with the observation, it will be set aside for repair.
If there are missing packets, and no further packets are forthcoming from the spacecraft, the RSDP pipeline will place fill packets into the position of the missing packets. Information about missing packets and fill data can be found in the trailer file (.trl). Calibration output files are produced and archived into the HST Data Archive, and a laser printer plot is generated for the first group of the uncalibrated and calibrated images. These laser printer plots are solely for the purpose of a first look by the observer; they are not intended to be publication quality or available to the public. calhrs will produce several types of calibrated output files, depending on the calibration steps performed. The calibrated output images are listed in Table 23.2.

Table 23.2: Calibrated Images

Data File   Contents
------------------------------------------------------------------------------
.c0h        Calibrated wavelength solution
.c1h        Calibrated science data (fluxes)
.c2h        Propagated statistical error
.c3h        Calibrated special diodes
.c4h        Special diodes data quality
.c5h        Background
.cqh        Calibrated science data quality
------------------------------------------------------------------------------

* Calibrated Wavelength Solution (.c0h): This image contains the wavelengths in Angstroms of the corresponding pixels in the .c1h file. The dimensionality and number of groups are the same as for the .c1h file. This file is produced when ADC_CORR is set to PERFORM.
* Calibrated Science Data (.c1h): This image contains the calibrated science data and is always produced by calhrs. The number of groups and dimensions of the spectra depend on the pattern used (STEP-PATT) in the observation and which calibration steps are performed. The contents can range from an exact copy of the raw science data (found in the .d0h image) to a fully flux-calibrated spectrum.
* Propagated Statistical Error (.c2h): This image contains the propagated statistical error associated with the .c1h science data.
The number of groups and dimensions of this file will be the same as for the .c1h image. The units of the error will also be the same as the final units of the science data.

* Calibrated Special Diodes (.c3h): This image contains the calibration of the special diodes, whose raw values correspond to the first 12 pixels in the .x0h file. See Table 23.3.

Table 23.3: Special Diodes

Diode Array   .c3h or .x0h   Description
Index         Pixel #
------------------------------------------------------------------------------
1             1              Upper Left Background (corner) Diode
2             2              Lower Left Background (corner) Diode
3             3              Left High-Energy Diode
4             4              Middle Focus (disabled) Diode
5             5              Upper Left Focus Diode
6             6              Lower Left Focus Diode
507           7              Upper Right Focus Diode
508           8              Lower Right Focus Diode
509           9              Middle Focus Diode
510           10             Right High-Energy Diode
511           11             Upper Right Background (corner) Diode
512           12             Lower Right Background (corner) Diode
------------------------------------------------------------------------------

* Special Diode Data Quality (.c4h): This image contains the data quality flags for the special diodes. See Table 24.2 on page 367 for data quality flags.
* Background (.c5h): This image contains the subtracted background. The units of the background will be either counts or count rate, depending on the setting of the EXP_CORR switch. This file is produced when BCK_CORR is set to PERFORM.
* Calibrated Science Data Quality (.cqh): This image contains the data quality flags associated with each pixel in the calibrated science data image. The flags are numbers; the higher the value, the worse the data in the corresponding pixel. See Table 24.2 on page 367 for data quality flags.

Standard Stars

The stars used as flux references are members of the set of standards established by STScI. Targets for specific observations are selected primarily on the basis of UV brightness, since we want to obtain count rates high enough to achieve adequate S/N (>50) in short exposure times.
During Cycle 4, we used the ultraviolet standard stars BD+28D4211 and mu Columbae as flux references. Neither star is known to show significant variability in the ultraviolet. BD+28D4211 is a hot white dwarf that was used for the sensitivity calibrations of the low- and medium-dispersion gratings. The corresponding observations for the echelles were done with the bright late-O supergiant mu Columbae. An example of such an observation is in Figure 23.1.

Figure 23.1: Spectrum of BD+28D4211 Obtained with Grating G140L in Fall 1994

Observational Procedures and Data Reduction

Sensitivity monitoring is done for Side 1 and Side 2 separately. The Side-1 monitor consists of a series of visits to the ultraviolet standard BD+28D4211 with identical instrumental configuration (except for exposure times) for all observations. The target was acquired into the Large Science Aperture with a 5 x 5 spiral search using mirror N1, followed by a peak-up. The science observations were done with grating G140L in ACCUM mode at two central wavelengths: 1200 A and 1500 A. For the Side 2 observations, BD+28D4211 was acquired into the LSA with a 3 x 3 spiral search using mirror N2, followed by a peak-up. Centering was confirmed by taking an image with the LSA. A series of spectra in ACCUM mode was taken with gratings G160M, G200M, and G270M. Central wavelengths were 1200 A, 1500 A, 2000 A, 2500 A, and 3000 A. This sequence was repeated approximately every three months.

Relative Fluxes

The results of the GHRS sensitivity monitor suggest that during Cycle 4 the GHRS sensitivity changes between 1200 A and 3000 A did not exceed about 2 percent. We find marginal evidence for a time dependence of the sensitivity, with a decline rate of about 2 percent per year. As an example, we show in Figure 23.2 flux ratios of BD+28D4211 obtained during Cycle 4 and referenced to the beginning of the cycle. (Details are in GHRS Instrument Science Report 071.)
The current sensitivity files used by calhrs reflect the state at the beginning of Cycle 4. We will continue to monitor both Side 1 and Side 2 for further decreases in sensitivity. If the sensitivity decrease becomes larger than 5 percent between 1200 A and 3000 A with respect to the value at the beginning of Cycle 4, updated sensitivity files will be installed. In Figure 23.2, the five panels are for central wavelengths of 1200 A, 1500 A, 2000 A, 2500 A, and 3000 A. Each point represents the ratio of the median counts measured over 20 A relative to the counts measured on SMS 94120 over the same 20 A (the first data point). Below 1200 A the sensitivity is clearly decreasing with time. This trend was marginally present in data obtained early in the Cycle and was confirmed later in Cycle 4. The loss of sensitivity from the beginning to the end of Cycle 4 at 1140 A is about 10 percent. Observers wishing to calibrate their Cycle 4 observations at the shortest wavelengths of the G140L grating should contact the STScI GHRS team via the Help Desk (help@stsci.edu).

Figure 23.2: Side 2 Sensitivity During Cycle 4

Absolute Fluxes

Fluxes on an absolute scale are known to within approximately 10 percent. The absolute flux scale used by the GHRS is tied to the system of STScI observatory standards. In May 1994, we switched over to the new, revised absolute flux scale established from observations of the white dwarf G191B2B (see GHRS Instrument Science Report 062). The new scale differs from the old one by up to 10 percent, depending on wavelength. Archived data obtained prior to May 1994 have not been recalibrated with the new flux scale. Therefore, spectra of the same star taken before and after May 1994 are on different flux scales. The absolute throughput of the LSA is poorly known. We estimate that approximately 90-95 percent of the light from a point source is encompassed by the LSA.
The relative throughput of the SSA with respect to the LSA was determined before and after the installation of COSTAR (see GHRS Instrument Science Report 062). Post-COSTAR values are shown in Figure 23.3, in which the LSA throughput is assumed to be 100 percent. The relative throughput of the SSA is wavelength dependent, with higher values measured at longer wavelengths. In Figure 23.3, the circles are observations of mu Col; all five data points are based on a single SSA ACQ/PEAKUP. The crosses are for chi Lup; the first and last points are from a single ACQ/PEAKUP, and the cluster of three points near 1950 A is from another. The diamonds are for AGK+81°266; each point is based on an individual ACQ/PEAKUP. The solid line is a straight-line fit to the mu Col and chi Lup data.

Figure 23.3: Ratio of Count Rates for the post-COSTAR SSA to the post-COSTAR LSA

Note that the calibrated science data in the .c1h file take into account the different point-source throughputs of the LSA and SSA before and after the installation of COSTAR. Therefore, a star observed before and after the installation of COSTAR will have the same flux, although its count rate will be lower before COSTAR.

Calibration Steps

Each calibration step is detailed in this section along with the corresponding calibration switch and the various reference files required by that step. A flowchart of the process is illustrated in Figure 23.4.

Figure 23.4: GHRS Calibration Process

Data Quality Initialization (DQI_CORR)

Apply data quality initialization using the reference file, dqihfile, which contains a data quality flag for each diode. This step compares each data quality flag in the .q0h file with the corresponding flag in the data quality initialization file (.r5h). The most severe flag is kept. Quality flags are not additive and are never decreased. The most severe data quality flags are written to the output file (.dqh). Table 24.2 defines these flags (in order of decreasing severity).
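The flag-merging rule described above (keep the most severe flag; never decrease one) amounts to an element-wise maximum if larger flag values denote more severe conditions, as they do for the calibrated data quality files. A minimal sketch in Python/NumPy (the function name is ours, not a calhrs internal):

```python
import numpy as np

def merge_quality_flags(obs_flags, reference_flags):
    """Keep the more severe (larger) of the observation's and the
    reference file's data quality flag for each diode.

    Mirrors the DQI_CORR behaviour described above, under the
    assumption that larger flag values mean more severe conditions;
    flags are never decreased.
    """
    return np.maximum(np.asarray(obs_flags), np.asarray(reference_flags))
```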
Conversion to Count Rates (EXP_CORR)

This step converts the input data to count rates by dividing by the exposure times. The exposure time is computed for each bin as:

EXPOSURE = n_coadd x (0.05 x intper - 0.002)

where:
* EXPOSURE is the exposure time per bin, as found in the keyword in the calibrated data headers,
* intper is the integration period in 0.05 second intervals,
* 0.002 is the overhead (in seconds) required to read out the data, and
* n_coadd is the number of coadds to the bin.

If either n_coadd or intper contains a fill value, no exposure time can be computed and the entire bin is flagged as unusable. The values for n_coadd and intper are read from the extracted engineering file.

Diode Response Correction (DIO_CORR)

The diodes within the Digicons of the GHRS do not have identical sensitivities. This step divides the count (or count rate) value by the diode's response (near unity) to correct for diode nonuniformity, using the diode response file, diohfile. When comb-addition is used, a smoothed diode response array is computed using a weighted average of diode responses. Data with a diode response value less than the minimum diode response value set in the ccr3 table are set to 0.0.

Paired Pulse Correction (PPC_CORR)

Correct the raw count rates for saturation in the detector electronics using ccg2, the paired-pulse correction table. On the first pass, this routine reads the paired-pulse parameters from table ccg2. If q0 is not equal to zero, then the following equation is used:

X = y/(1 - yt)

where:
* X is the true count rate,
* y is the observed count rate,
* if y <= F, then t = q0; if y > F, then t = q0 + q1(y - F), and
* q0, q1, and F are in ccg2.

If q0 = 0, then the following equation is used:

X = ln(1 - ty)/(-t)

where t = tau1 is from the ccg2 reference table.

Photocathode Mapping (MAP_CORR)

This step computes where on the photocathode the spectrum was located. This mapping of the location is performed in LINE and SAMPLE space.
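The EXP_CORR and PPC_CORR formulas above translate directly into code. A minimal sketch in plain Python (function names are ours; in real processing the q0, q1, F, and tau1 values come from the ccg2 table):

```python
import math

def exposure_time(n_coadd, intper):
    """EXPOSURE = n_coadd x (0.05 x intper - 0.002), where intper is
    the integration period in units of 0.05 s and 0.002 s is the
    readout overhead, per the EXP_CORR description above."""
    return n_coadd * (0.05 * intper - 0.002)

def true_count_rate(y, q0, q1, F, tau1):
    """Invert the paired-pulse (saturation) effect following the
    PPC_CORR equations: for q0 != 0, X = y/(1 - y*t), with t = q0
    below the threshold F and t = q0 + q1*(y - F) above it; for
    q0 == 0, X = ln(1 - tau1*y)/(-tau1)."""
    if q0 != 0.0:
        t = q0 if y <= F else q0 + q1 * (y - F)
        return y / (1.0 - y * t)
    return math.log(1.0 - tau1 * y) / -tau1
```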
LINE position is perpendicular to the dispersion, and SAMPLE is parallel to the dispersion. This calculation is used by some of the following calibration steps. The positions of the individual substep bins are mapped into photocathode space using the following:

SAMPLE(bin) = s0 + b x XD + c x XD^2
DELTAS(bin) = e
LINE(bin) = L0 + A x YD

where:
* SAMPLE is the sample position of the first diode,
* DELTAS is the spacing between sample positions,
* LINE is the line position of the diodes,
* XD is the X-deflection minus 2048,
* YD is the Y-deflection minus 2048,
* s0, b, c, and e are coefficients in ccr2, interpolated for the given Y-deflection, and
* L0 and A are coefficients in ccr1.

Doppler Compensation (DOP_CORR)

Correct for Doppler compensation when removing photocathode nonuniformities (PHC_CORR set to PERFORM) and vignetting (VIG_CORR set to PERFORM). Do not confuse this with the on-board Doppler compensation indicated in the science header by the value of the DOPPLER keyword. This step computes the percentage of time spent at each Doppler offset. These are computed by dividing the observation into time segments and computing the deflection offset for each segment. The SHP packet time is used as the start of the readout, and the packet time of the first science packet is used as the ending time of the readout. Currently, this step is not applied in Routine Science Data Processing.

Photocathode Nonuniformity Removal (PHC_CORR)

This step removes the photocathode granularity using a reference file that has a granularity map, phchfile. (Presently, however, the GHRS is not using this feature because obtaining the flat-field exposures for all possible grating positions is impractical. Therefore the phchfile is currently a DUMMY file that is populated with ones and zeroes. However, there are plans to obtain a G140L flat field during Cycle 5.) This map is intended to have a granularity vector for multiple line positions.
At each line position, the granularity is tabulated with a constant starting sample for all lines and a constant delta sample. To compute the response for a given line and sample, bilinear interpolation is used within the reference file. If Doppler compensation is specified (DOP_CORR = 'PERFORM'), the response is smoothed by a weighting function describing the motion of the data samples along the photocathode. (This calibration will initially be known only for a very few selected wavelength ranges. Using FP-SPLIT will generally be required for high S/N work.)

Vignetting Removal (VIG_CORR)

This routine removes the vignetting and low-frequency photocathode response using a reference file that has a vignetting map, vighfile. This map has a vignetting vector for multiple line positions and possibly carousel positions. At each line position the vignetting response is tabulated with a constant starting sample for all lines and a constant delta sample. To compute the response for a given line and sample, trilinear interpolation is used within the reference file over carousel position, line position, and sample position. If Doppler compensation is specified (DOP_CORR = 'PERFORM'), the response is smoothed by a weighting function describing the motion of the data samples along the photocathode.

Merging Substep Bins (MER_CORR)

This routine merges the spectral data. Unmerged output data are just a copy of the input data. The STEPPATT keyword value is used by calhrs to determine whether or not the data are to be merged. Any STEPPATT that accumulates two or more bins of spectra, as listed in Table 8.5 of the GHRS Instrument Handbook, will require merging. To illustrate the merging, consider input data having values Dbin.diode for bin number bin and diode number diode. The data would look like the following:

bin 1: D1.1 D1.2 D1.3 D1.4 ...
bin 2: D2.1 D2.2 D2.3 D2.4 ...
...
bin 7: D7.1 D7.2 D7.3 D7.4 ...
The position of a data point from the 2-dimensional data array in the 1-dimensional data array is 500 x bin + diode - 1. This routine maps the data into the output array for half-stepped data as:

D1.1 D2.1 D1.2 D2.2 D1.3 D2.3 ...

And for quarter-stepped data as:

D1.1 D2.1 D3.1 D4.1 D1.2 D2.2 D3.2 D4.2 D1.3 ...

Background Removal (MDF_CORR, MNF_CORR, PLY_CORR, BCK_CORR)

This step removes the background from the observed flux. The switch BCK_CORR determines whether or not background removal is done. The other switches, MDF_CORR, MNF_CORR, and PLY_CORR, determine how the background is smoothed before subtraction.

1. If the science data are composed of multiple substep bins, the sky background will be resampled by linearly interpolating adjacent smoothed background data values. The background will then be scaled by:

B_scale = B_res x N_aper

where:
* B_scale is the background to subtract from the science data,
* B_res is the resampled sky background, and
* N_aper is the normalization factor that compensates for the different sizes of the apertures.

The two other methods involve internal measures of the background. Both methods use the same formula for determining the background vector:

B_i = 0.5(a x U_i + b x L_i) - c x N_i + d x N_ave

where:
* B_i is the background at diode i,
* a, b, c, and d are scattered-light coefficients from table ccrb,
* U_i is the upper inter-order background at diode i,
* L_i is the lower inter-order background at diode i,
* N_i is the net on-order count rate, and
* N_ave is the average of N over all science diodes.

N_i is determined by N_i = D_i - 0.5(U_i + L_i), where D_i is the on-order count rate at diode i. The three methods differ in how the U_i and L_i data are determined.

2. Background measured from inter-order spectra. The background is measured by the science diodes by observing the photocathode above and below the science data. U_i is set to the upper background spectrum and L_i is set to the lower background spectrum.
3. Background measured from the corner diodes. There can be up to six substep bins sampling the upper and/or lower background diodes. The background for each corner diode is the average of all measurements for that particular corner diode:

B_corner = (sum over n of B_corner,n) / n

where:
* corner is the corner diode identifier: UL (upper left), UR (upper right), LL (lower left), LR (lower right),
* B_corner is the effective background measured by the corner diode,
* B_corner,n is an individual background measurement by that corner diode, and
* n is the number of measurements from the corner diode.

The U_i and L_i vectors are then calculated by interpolating between the corner diodes as follows:

U_i = [(C - C_1) x (B_UR - B_UL)/(C_2 - C_1)] + B_UL

and

L_i = [(C - C_1) x (B_LR - B_LL)/(C_2 - C_1)] + B_LL

where:
* U_i is the upper background at diode i,
* L_i is the lower background at diode i,
* B is explained above,
* C is the channel for diode i,
* C_1 is the effective channel or diode for the left corner diodes, and
* C_2 is the effective channel or diode for the right corner diodes.

The observation can specify any combination of corner diodes using the substep bin identifications (BINID) found in Table 23.4. If no specific corner diodes are specified, then all four corner diodes are used for each science substep bin. If MDF_CORR is set to PERFORM, then a median filter is applied to the background. The size of the filter box is found in table ccr3, in columns SKY_MDFWIDTH and INT_MDFWIDTH. This switch is not normally applied in RSDP; it is provided as a recalibration option. If MNF_CORR is set to PERFORM, then a mean filter is applied to the background. The size of the filter box is found in table ccr3, in columns SKY_MNFWIDTH and INT_MNFWIDTH. This switch is not normally applied in RSDP; it is provided as a recalibration option. If PLY_CORR is set to PERFORM, then a polynomial is fit to the background and the function is subtracted.
The order of the polynomial is found in table ccr3, in columns SKY_ORDER and INT_ORDER. Currently the order is set to 0, but it can be modified in your own copy of the ccr3 table as a recalibration option. It is possible to have all three filter options set, in which case they are all performed in the order given above.

Table 23.4: Background Diodes Used by Substepping

BINID   Corner Diodes Determining Background
------------------------------------------------------------------------------
8       Right and Left Upper
9       Right and Left Lower
10      Left Upper
11      Left Lower
12      Right Upper
13      Right Lower
14      Upper and Lower Left
15      Upper and Lower Right
------------------------------------------------------------------------------

Determine Wavelengths (ADC_CORR, GWC_CORR)

Convert the sample positions on the photocathode to wavelengths by applying the dispersion constants, using tables ccr5, ccr6, ccr7, and ccrc, which contain spectral order, dispersion, and thermal constants. ADC_CORR computes spectral orders and wavelengths. For first-order gratings, the spectral order is set to 1. For the echelle gratings, the spectral order is computed by the following formula:

order = NINT[(b x A x sin((C - carpos)/B)) / (ydef - a - d x A x sin((C - carpos)/B))]

where:
* NINT is the nearest integer,
* A, B, and C are in table ccr5,
* a, b, and d are in table ccr5,
* carpos is the carousel position, and
* ydef is the Y-deflection adjusted for the proper aperture (LSA: 128 added to it; SC1: 128 subtracted from it).

The wavelengths are computed by solving the dispersion relation for wavelength using Newton's iterative method. The dispersion relation is described by the following equation:

s = a0 + a1(m lambda) + a2(m lambda)^2 + a3 m + a4 lambda + a5 m^2 lambda + a6 m lambda^2 + a7(m lambda)^3

where:
* m is the spectral order,
* lambda is the wavelength,
* a0, a1, ... are the dispersion coefficients, and
* s is the sample position.

The dispersion constants are calculated in one of two ways.
If the switch GWC_CORR is set to PERFORM, then the dispersion coefficients are calculated from the ccrc table's set of global coefficients, which define a function based on carousel position. If ADC_CORR is PERFORM but GWC_CORR is set to OMIT, then the dispersion coefficients are read from the ccr6 table, which contains the dispersion coefficients for a few carousel positions. Therefore, when GWC_CORR is OMIT and the required position is not in the ccr6 table, interpolation is performed between the two sets of coefficients bracketing that position.

Apply the Incident Angle Correction (IAC_CORR)

Adjust the zero-point of the wavelength scale for the large science aperture and the two spectral lamp apertures using table ccr8, which contains incidence angle coefficients. This routine adjusts the wavelength array for the difference in incidence angle of apertures LSA, SC1, and SC2 from the SSA. Table ccr8 is searched for the correct grating, spectral order, aperture, and carousel position to obtain two coefficients, A and B. Interpolation of the coefficients (in carousel position) is used if an exact match is not found. These coefficients are then used to compute an offset using the following formula:

lambda = lambda + (A + Bs)/m

where:

* lambda is the wavelength,
* A and B are coefficients from ccr8,
* s is the photocathode sample position, and
* m is the spectral order.

Echelle Ripple Correction (ECH_CORR)

If one of the echelle gratings is used, divide the flux value by the normalized grating efficiency to remove the echelle ripple, using tables ccr9 and ccra, which contain echelle ripple constants.
This step performs the echelle ripple removal by dividing the flux by the following echelle ripple function:

ripple = gnorm x sinc^2(ax + b)

where

gnorm = cos(theta + beta + delta) / cos(theta + beta - delta + epsilon)

x = pi x m x cos(theta + beta + delta) x sin(theta + epsilon/2) / sin(theta + beta + epsilon/2)

epsilon = atan((samp - 280.0)/f)

theta = (2 pi x (r0 - carpos))/65536.0

and

* m is the spectral order,
* samp is the photocathode sample position,
* r0, beta, delta, and f are grating parameters in the ccra table, and
* a and b are coefficients from the ccr9 table.

The ripple function is normalized to 1.0 at the center of the order. The center of the order is defined to be the center of the photocathode at carousel position 27492 for echelle A and 39144 for echelle B.

Absolute Flux Conversion (FLX_CORR)

This step converts the input flux to absolute flux units by dividing it by a sensitivity stored in the abshfile (sensitivities) and nethfile (wavelengths for the sensitivities) files. Quadratic interpolation is used within the sensitivity file to compute sensitivities for the input wavelengths.

Heliocentric Correction (HEL_CORR)

Convert wavelengths to the heliocentric coordinate system. This step corrects for the earth's motion around the sun and modifies the wavelengths appropriately. The wavelength correction is computed as follows:

lambda = lambda_obs / (1 - (V/c))

where:

* lambda_obs is calculated by the dispersion correction (ADC_CORR),
* V is the velocity of HST in the direction of the target,
  V = V_x cos alpha cos delta + V_y sin alpha cos delta + V_z sin delta,
* V_x, V_y, and V_z are computed from parameters in the SHP file,
* alpha and delta are the right ascension and declination of the target, and
* c is the velocity of light.

Vacuum Correction (VAC_CORR)

This step converts vacuum wavelengths to air wavelengths above 2000 A. This correction is not applied in Routine Science Data Processing.
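The heliocentric correction (HEL_CORR) above amounts to a velocity projection followed by a scale factor. A minimal sketch (the velocity components here are arbitrary numbers standing in for the values derived from the SHP file):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def helio_velocity(vx, vy, vz, ra_deg, dec_deg):
    """Project HST's velocity components (km/s) onto the target
    direction: V = Vx cos(a)cos(d) + Vy sin(a)cos(d) + Vz sin(d)."""
    a = math.radians(ra_deg)
    d = math.radians(dec_deg)
    return (vx * math.cos(a) * math.cos(d)
            + vy * math.sin(a) * math.cos(d)
            + vz * math.sin(d))

def heliocentric_wavelength(lam_obs, v_km_s):
    """lambda = lambda_obs / (1 - V/c)."""
    return lam_obs / (1.0 - v_km_s / C_KM_S)
```

For a target at (RA, Dec) = (0, 0) only the Vx component projects onto the line of sight, and for V = 0 the wavelength is unchanged.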
GHRS data are routinely calibrated to vacuum wavelengths; this step is provided as a recalibration option. The following formula is used:

lambda_vac/lambda_air = 1.0 + 2.735182 x 10^-4 + 131.4182/lambda_vac^2 + 2.76249 x 10^8/lambda_vac^4

------------------------------------------------------------------------------

CHAPTER 24: GHRS Error Sources

In This Chapter...
Calibration Pipeline
Raw Data Quality Files
Calibration Quality Files

In this chapter, various error sources affecting data obtained with the GHRS are discussed.

Calibration Pipeline

All HST science observations are first received on the ground at White Sands, New Mexico, and are then relayed by satellite to the DCF (Data Capture Facility; after January 1996, PACCOR), Goddard Space Flight Center, Maryland. The DCF staff perform a check of the transmission using a Reed-Solomon error-checking routine. With Reed-Solomon coding, the data are encoded into a cyclic pattern before transmission. The downlinked data are then decoded, and the Reed-Solomon code verifies the pattern and corrects, if possible, those words that are not as expected. If any packets are missing, DCF will wait for the next STR (space tape recorder) dump to verify that no more data packets are forthcoming. Missing data packets and the results of the Reed-Solomon check are written to the data quality accounting capsule (QAC). The data packets and QAC are transmitted to the pipeline.

Each GHRS observation set is processed through the Routine Science Data Processing (RSDP) pipeline as far as generic conversion. The output of generic conversion is what users refer to as the raw, uncalibrated dataset. The pipeline then automatically runs the calhrs software to calibrate science files, such as ACCUMs, RAPIDs, and SPYBALs. ACQuisition files, IMAGEs, and DEFCALs are not calibrated. It is left up to the user to decide whether to recalibrate the science data at a later time.
See Chapter 25, "Recalibrating GHRS Data" on page 369, for information that will help you decide when to recalibrate.

Raw Data Quality Files

Each science data file (.d0h), extracted data file (.x0h), and target acquisition file (.d1h) has a corresponding data quality file containing flags or fill values for bad data. These quality files (.q0h, .xqh, .q1h) are created during RSDP pipeline processing and contain information extracted from the QAC, or are filled during the data evaluation process. For each data point in the .d0h, .x0h, or .d1h file, there is a corresponding data point in the quality file. For each good data value, the corresponding point in the quality file will be zero. If data are missing (data dropout) due to tape recorder flips, downlink problems, or other unforeseen problems, the RSDP pipeline will pad the corresponding data file with fill data, and the corresponding data point in the quality file will be set to 16. If the data failed the Reed-Solomon error check, the corresponding data point in the quality file will be set to 1. Data that fail the Reed-Solomon check are classified as suspect; they are not necessarily bad.

Table 24.1: Pipeline Data Quality Values

Quality Value   Description
------------------------------------------------------------------------------
0               Good data point
1               Reed-Solomon error
16              Fill data
------------------------------------------------------------------------------

Calibration Quality Files

The calhrs task reads the raw data quality files, operates on the data using the quality flags as a discriminant, and flags the appropriate data values in the output quality files. Fill data are ignored. Reed-Solomon-flagged data are calibrated, but are flagged as suspect in the output quality files. The initial error in the data is assumed to be Poisson limited. The error is propagated mathematically through the calibration process.
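As an illustration of how the Table 24.1 values might be used when screening raw data yourself, the count and quality arrays below are invented:

```python
import numpy as np

REED_SOLOMON = 1   # suspect, but still calibrated by calhrs
FILL = 16          # padded data, ignored by calhrs

# Hypothetical raw counts with one fill point and one suspect point
counts  = np.array([120.0, 118.0, 0.0, 131.0, 125.0])
quality = np.array([0, 0, FILL, REED_SOLOMON, 0])

good    = quality == 0             # fully trusted points
usable  = quality != FILL          # everything except fill data
suspect = quality == REED_SOLOMON  # flagged by the Reed-Solomon check

mean_good = counts[good].mean()    # statistic over trusted points only
```

Boolean masks like these mirror what calhrs does internally: fill points are dropped, while Reed-Solomon points are kept but carried forward as suspect.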
The propagated statistical error file (.c2h) contains a measure of the statistical errors of the original data values; this file is calibrated in lock-step with the science data files. The flux data quality file (.cqh) flags bad pixel values in the calibrated flux and propagated statistical error files. The special diodes data quality file (.c4h) flags bad pixel values in the calibrated special diodes file (.c3h). Good pixel values are identified by a zero value.

Table 24.2: Data Quality Flags

Flag Value   Description
------------------------------------------------------------------------------
Category 1: Data not useful - data values set to zero
800          Data filled
400          Dead or disabled channel
300          Severe saturation (uncertainty > 50 percent)
Category 2: Data uncertain - uncertainty not indicated in error computation
190          Large saturation correction (uncertainty > 20 percent)
180          Photocathode blemish (deep)
150          Photocathode blemish (medium)
140          Photocathode blemish (small)
130          Moderate saturation correction (uncertainty > 5 percent)
100          Reed-Solomon decoding error
Category 3: Data uncertain - uncertainty indicated in propagated error file
30           Dead diode contributed to comb-added data point
------------------------------------------------------------------------------

The pipeline data reduction system (PODPS) automatically assigns a wavelength scale to your GHRS ACCUMs and RAPIDs when they are reduced. Note that this default wavelength scale is the one appearing in the .c0h file that accompanies the .c1h file containing fluxes, even if a wavelength calibration exposure (wavecal) is available for that program. If you have a wavecal, you must analyze it and apply the results yourself; this is explained in the section "Recalibrating GHRS Data".
This default wavelength scale is calculated using terms that depend primarily on the carousel position (i.e., the orientation of the grating that was used), but there are also terms for the temperature within the GHRS (recorded in the engineering data stream) and a weak time-dependent term. The default wavelength scale is good to approximately one diode rms, with contributions from the various effects enumerated in Table 24.3 (taken from Heap et al. 1995, PASP, 107, 871). Not listed is the error in wavelength that occurs for observations made in the Large Science Aperture (LSA). Light from the LSA strikes the gratings at a different angle than light from the SSA; the difference can produce a wavelength error as large as 1.5 diodes. This effect is now being calibrated and will be corrected for in the future.

Here and elsewhere in this chapter we will consider OSCANs and WSCANs as equivalent to ACCUMs. Both OSCANs and WSCANs are "macros" that generate a series of ACCUMs when the program is executed on the telescope.

Table 24.3: Error Sources

Source of Error                             Maximum Error (diodes)
------------------------------------------------------------------------------
Quality of dispersion coefficients          0.1
Incident angle correction, SC2 to SSA       0.1
Uncertainty in thermal and time models      1.0
Short-term thermal motions                  0.4 hour^-1
Carousel repeatability                      0.5 (0.17 typical)
Onboard Doppler compensation effects        0.15 typical
Geomagnetic image motion                    0.25
Uncertainty in centering target in SSA      0.21
------------------------------------------------------------------------------

The largest sources of uncertainty in wavelength are obviously the geomagnetic image motion (GIMP) and the model used to correct for thermal and time effects. We have long cautioned observers to break long exposures into units lasting no more than five to ten minutes, in order to reduce the effects of GIMP below significant levels.
Thus you should only rarely find GIMP leading to loss of resolution in the final spectrum. The thermal effects, however, include a large component that appears to be random and cannot be calibrated. It is these thermal effects that are best removed through use of a wavecal or SPYBAL.

Thermal motions are just that: a motion of the overall image of the spectrum. The changes in image scale - dispersion - that occur are very small and can safely be ignored in most instances. For example, for grating G270M, we have found that the centers of our calibration spectra deviate from the default wavelength scale by no more than 100 mA (typically about 70 mA). A fit through the measured positions of the comparison lines relative to the default wavelength scale has a slope of about 3 x 10^-4 (in dimensionless units of A per A), so that the ends deviate from the center by, typically, about 3 mA across a 40 A wide spectrum. The rms scatter of the fit is typically about 0.2 km s^-1. Not all the gratings are this good; some other values for the quality of fit are provided in Table 24.4.

Table 24.4: Quality of Default Wavelength Scale for Side 2 First-Order Gratings

                       rms deviation of fit       Deviation at    Slope of fit
Grating   Wavelength   (mA)        (km s^-1)      center (mA)     (x 10^4)
------------------------------------------------------------------------------
G160M     1240         15          3.7            60              3
          1400+        3 to 7      0.6 to 1.3     20 to 70        2 to 4
G200M     all          3 to 6      0.4 to 0.8     20 to 80        2 to 5
G270M     all          1 to 10     0.1 to 1.3     50 to 100       2 to 10
------------------------------------------------------------------------------

------------------------------------------------------------------------------

CHAPTER 25: Recalibrating GHRS Data

In This Chapter...
How to use calhrs
Selecting the "Best" Reference Images and Tables
Using Wavelength Calibration Exposures
FP-SPLITs
Flux

Many users will find that the standard calibration produced by the RSDP pipeline is adequate for their purposes. The RSDP pipeline uses the most up-to-date calibration reference files available at the time the observation was received. Monitoring has shown that the GHRS has been stable over the lifetime of the instrument; however, some instrumental properties have changed slightly over time. The Archive user should be aware that some GHRS observations were obtained before on-orbit calibration reference files were released to OPUS (formerly PODPS). Some calibration reference files are time-tagged, indicating that they should be used with data taken within a specific range of dates. The chapter on GHRS Error Sources provides guidance on the quality of default calibrations.

Updated or more timely reference files sometimes become available after the data were processed. If there are unusual features in the data, if the analysis requires a high level of accuracy, or if wavecal observations were obtained with the science observation, then the user may want to determine whether a better calibration is possible and recalibrate the data. The user should perform a StarView search and check the list of reference files used during RSDP pipeline processing against the recommended calibration reference files. The decision to recalibrate depends upon which calibration image or table changed, and whether that kind of correction is likely to affect the analysis. Before deciding to recalibrate, we recommend that the user retrieve the recommended and used calibration files and compare them to see whether the differences are important. All the information necessary to calibrate your GHRS observations is contained in the science data header keywords.
calhrs opens the header file and determines which set of calibration steps to PERFORM or OMIT and which calibration reference files and tables to use during the calibration process. The user has the option of using the current set of calibration switches and specified reference files in the header file, or of altering the values of certain keywords. The STSDAS task chcalpar can be used to modify the calibration parameters simply and reliably.

How to use calhrs

The calibration software takes as input the raw data images (.d0h, .q0h, .shh, .ulh, .x0h, .xqh) and the calibration reference images or tables. The calibration software determines which calibration steps to perform and which reference files to use from the calibration keyword values (switches and reference files) in the header of the raw data (.d0h) file. The values of the calibration switches and reference file keywords depend on the instrumental configuration used, the date when the observations were taken, and any special pre-specified constraints. The header keyword values were populated in the raw data file in the RSDP pipeline during Generic Conversion.

Reference files consist of images and tables. A calibration reference image is an STSDAS image (IRAF imtype = "hhh"); it consists of two files, an ASCII header and a binary data file. A reference table is an STSDAS format table: a single binary file which may contain data of several types. By convention, the extension of a reference image begins with the letter "r" and that of a reference table begins with the letter "c".

The user should determine the values of the calibration switches and reference file keywords in the raw data (or calibrated) header. Prior to calibration (.d0h), the calibration switches will have the value OMIT or PERFORM.
After calibration (.c1h), the switches for completed steps will have been assigned the value COMPLETE in the header keywords of the calibrated dataset, unless the software knows the reference file is a DUMMY file, in which case the value of the switch keyword will be SKIPPED. The IRAF task imheader can be used to examine the data header file:

to> imheader rootname.d0h 1+ | page

An excerpt from a GHRS science data header file showing the calibration reference files and switches is presented in Figure 25.1.

Figure 25.1: Excerpt from Science Data Header File Showing Calibration Reference Files and Switches

The HST data headers are self-documenting. The data processing steps performed are contained within the headers, as is the state of the telescope and instrumentation at the time of the observation. The trailer file (.trl) contains the history of the RSDP pipeline processing, including the history of the calibration steps executed.

Selecting the "Best" Reference Images and Tables

Over the past four years a number of changes have been made to both the reference images and tables and the calibration software that employs them. In general, software changes are backwards-compatible with earlier versions of reference tables. However, this has not always been the case, and the current version of the software will not always run properly with old data or old reference files. This is most likely to be a problem if one is using data from before November 11, 1991, in its original form. The simplest work-around for any problem of this sort is to obtain officially processed data and the latest (appropriate) reference images and tables from the HST Data Archive.

There are two ways to identify the "best" reference images and tables to use when recalibrating GHRS data.

1) The getreffile task in STSDAS is available for identifying appropriate reference images and tables for recalibration of GHRS data.
These reference images and tables may be requested from STScI in the same fashion as any other non-proprietary data products. If you have any problems, send an e-mail request to help@stsci.edu. This task works only at STScI because it accesses the Calibration Database (CDBS), which is not part of STSDAS.

2) StarView can retrieve calibration images and tables from the HST Archive; the GHRS calibration form provides a simple interface for identifying the best files.

Finding Appropriate Reference Files

The calibration reference files used and recommended for a particular observation can be determined by performing a StarView search of the HST Archive. The user should select the option "Other Searches" upon entering StarView and select "Calibration" under the header GHRS. Input the observation rootname and perform the search to list the calibration reference files and tables used during RSDP pipeline processing and the recommended reference files for calibration. These files can be obtained from the HST catalog and Archive through a StarView search and retrieval request. Requests for Archive accounts should be sent to archive@stsci.edu. (See Chapter 4 for a description of StarView.) Information about CDBS (Calibration Data Base System) calibration reference images and tables can also be found on the STScI GHRS World Wide Web page or within the STScI anonymous FTP site.

Running the Software

The STSDAS software runs under IRAF and is free to the astronomical community; it can be retrieved through the "Software" web page on STEIS or from the STScI anonymous FTP area. See Chapter 2 for information about setting up and using IRAF and STSDAS. In order to recalibrate your data, you need to have all the reference images and tables that are specified by the calibration switches in the science data header (.d0h). If you received your data by tape, these files are usually included.
If you want to change any of the files originally used by the RSDP pipeline calibration software to calibrate the dataset, the files can be retrieved from the HST Archive. We strongly suggest copying the raw data files and calibration reference files to another directory for processing, thus preserving the original files. If you want to change the calibration switches or update the reference files, we recommend that you use the chcalpar task (in the ctools package under hst_calib). This task provides a simple and consistent method for changing calibration parameters in any of the HST instrument headers.

The calibration task calhrs has only two user-selectable parameters: the input and output file names. If only the input name is specified, the output filenames will have the same rootname.

hr> calhrs rootname output

calhrs will write informational messages to the screen as it calibrates. These messages are saved in the trailer file (.trl) when RSDP calibrates the data; you can save them yourself by redirecting the output into a file. Each observation mode will have calibration switches set to default values. Because some steps require that other calibration steps be completed first, there can be cases where a switch is set to PERFORM yet the step is not executed in the pipeline. In this case, the calibration switch value will remain set to PERFORM in the output product (.c*h).

The calibration process can logically be thought of in terms of two distinct steps: flux calibration and wavelength calibration. The extension of the file that contains the wavelengths is .c0h, while the extension of the flux-calibrated image is .c1h. Each calibration step is detailed in the section "Calibration Steps" in the chapter on Calibrating GHRS Data, along with the corresponding calibration switch and the various reference files required by that step. The flux calibration consists of the following steps: 1.
Flag Dead Diodes: Identify known dead or problematic diodes to correct comb-added data values.

2. Convert to Count Rate: Divide by the exposure time to convert from counts to counts/sec.

3. Correct for Diode Response: Correct for diode-to-diode variations.

4. Apply Paired-Pulse Correction: Correct for deadtime in the detector electronics, which results in a nonlinear detector response at high count rates.

5. Correct for "Vignetting": Correct for low-frequency variations due to optical obscuration and the quantum efficiency of the photocathode.

6. Subtract Background: Calculate the background, smooth it, and subtract it from the object spectrum. By default, the smoothing of the background is the mean of the background data obtained, i.e., a single value. This mean background is then subtracted from the object spectrum. The order of the polynomial used to fit the background is written to the standard output.

7. Remove Blaze Function: Correct for the low-frequency response along an order of the echelle grating.

8. Convert to Absolute Flux: Convert corrected count rates to absolute flux. This conversion is based on the ratio of the empirical count rates to a set of established fluxes of a spectrophotometric standard star.

The wavelength calibration consists of the following steps:

1. Calculate Wavelengths: Calculate wavelengths based on the carousel position of the grating used. A zero-order correction is also made to account for thermal drifts.

2. Apply Incidence Angle Correction: Correct the wavelength scale to account for the geometric offset between apertures. Since the default wavelengths are determined for the SSA, observations in the LSA must be adjusted.

3. Convert to Heliocentric Wavelengths: Correct the wavelength scale to account for the earth's motion around the sun.

Using Wavelength Calibration Exposures

The standard wavelength calibration can be improved by using a wavecal or SPYBAL observation taken close in time to the science data to correct for zero-point offset.
In addition, if a wavecal observation was deliberately obtained at the same carousel position as the science data as part of the observations, and if the science observations were not obtained as an FP-SPLIT, you may choose to re-derive the wavelength dispersion constants and use them to create a new calibrated wavelength file (.c0h) for the science observation. When re-deriving the dispersion, you should ensure that the science data and the wavecal observation were obtained at the same carousel setting by examining the value of the keyword CARPOS in both files.

Correcting the Zero Point Offset

You can use the STSDAS waveoff task to derive a new zero-point offset for the wavelength scale from either a wavecal or a SPYBAL. Currently, waveoff will print the pixel, wavelength, and sample space offsets to the screen. You can then apply the wavelength offset to the science observations by using the imcalc task to add the calculated offset to each pixel (for all the groups) in the wavelength file. See the help file for waveoff for examples of how to do this.

Re-deriving the Dispersion Coefficients

You can use the task zwavecal to re-derive the wavelength dispersion coefficients from a wavecal observation and create a new calibration table with these values. You can then recalibrate your data with calhrs, using the newly derived dispersion coefficients to create the calibrated wavelength file (.c0h) for your science observations. Note that you can only use this method if you have a wavecal observation at the same carousel position as your science data, taken close in time to your science data, and your science data were not obtained as an FP-SPLIT. You can verify that the science data and the wavecal observation were obtained at the same carousel setting by examining the value of the keyword CARPOS in both files. Likewise, the value of the keyword FP-SPLIT should be 'NO'.
cl> hselect z29h0107t.c1h,z29h0108t.c1h $I,carpos,fp-split yes
z29h0107t.c1h 50680 NO
z29h0108t.c1h 50680 NO

Once you have run the zwavecal task, you can use the chcalpar task (described in "Setting Parameters" on page 19) to change the value of the header keyword CCR6 (the dispersion constants reference table) in the science raw data header (.d0h) file to point to the newly created dispersion table. At the same time, change GWC_CORR to 'OMIT' and make sure that ADC_CORR is set to 'PERFORM'. Then re-run calhrs on the science observation. The calhrs task will produce a new set of calibrated files, including a new wavelength (.c0h) file reflecting the new dispersion solution. For example, if you had two observations, the first of which was a calibration lamp observation called z29h0107t that was requested at the same carousel position as science observation z29h0108t, you could use the commands shown in Figure 25.2 to improve the wavelength solution.

Figure 25.2: Improving the Wavelength Solution

FP-SPLITs

If your data were taken in FP-SPLIT mode, then your calibrated data will have multiple groups containing independent subintegrations taken at slightly offset carousel positions. To obtain your final spectrum, with the full integration time, you need to combine the group spectra into a single spectrum. Recall that when taking data in FP-SPLIT mode, the grating carousel is shifted slightly between subintegrations so that a different portion of the photocathode is illuminated each time. Thus, each FP-SPLIT group in your calibrated spectrum is shifted in wavelength space with respect to the others (see "Putting FP-SPLITs Back Together" on page 335). When the individual FP-SPLIT spectra are combined into a single spectrum, the effects of the granularity of the photocathode response are reduced, since the flux measured in a single pixel of the final spectrum will have been collected at several different photocathode locations.
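The alignment idea behind combining the shifted groups can be illustrated with a toy integer-pixel cross-correlation; the real poffsets task is more capable (it can also use the .c0h wavelengths), and the spectra below are synthetic:

```python
import numpy as np

def pixel_shift(ref, spec, max_lag=10):
    """Return the integer pixel lag that best aligns `spec` with `ref`
    (positive lag: features in `spec` sit to the right of `ref`).
    A deliberately simple stand-in for cross-correlation alignment."""
    n = len(ref)
    best_lag, best_cc = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = ref[:n - lag], spec[lag:]
        else:
            a, b = ref[-lag:], spec[:n + lag]
        # mean-subtracted dot product over the overlap region
        cc = np.dot(a - a.mean(), b - b.mean())
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return best_lag
```

Once the lags are known, each group can be shifted by its lag and the groups co-added, which is essentially what specalign does (with sub-pixel refinement).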
To combine the groups of an FP-SPLIT observation, you can use the two STSDAS tasks hrs.poffsets and hrs.specalign. The poffsets task determines the shifts needed to align the spectra, either by cross-correlating features in the individual spectra or by using the information in the .c0h file, which gives the wavelength at each pixel. The specalign task combines spectra after first shifting them to align in wavelength space. These tasks are not specific to GHRS data: they can be used on any spectra that you wish to co-align and co-add. They are, however, of particular use in combining the FP-SPLIT groups of an ACCUM mode GHRS observation, since for high signal-to-noise FP-SPLIT data the tasks can also be used to derive the photocathode response function (i.e., the photocathode flatfield) for your observations. You can then use the photocathode response function to assess the reliability of the features in your final spectrum. A detailed description of how to use poffsets and specalign to combine the groups of an FP-SPLIT observation can be found in the help files for the tasks.

Flux

How the Flux Scale is Calibrated

The sensitivity functions quantify the relationship between the observed flux from a target and the count rate detected by the GHRS. For calibration purposes the fluxes of the reference stars are expressed in units of ergs/cm^2/sec/A. The raw GHRS data have units of counts per diode per second. The sensitivity functions are simply the ratio of these quantities, with no other constants, scale factors, or transformations included. The post-COSTAR sensitivity functions for all GHRS gratings are in Chapter 8 of the GHRS Instrument Handbook. (See GHRS ISR 060 for the pre-COSTAR sensitivity functions.) For planning purposes, a known or estimated flux can be multiplied by the sensitivity to estimate what the GHRS count rate will be for a particular grating.
During data reduction, an observed count rate can be divided by the sensitivity function to calibrate the data in flux units by setting the value of the FLX_CORR switch to PERFORM in the .d0h header.

The sensitivity functions depend on several factors. The telescope contributes its unobscured geometrical collecting area, the reflectivity of both mirrors, and the fraction of the light that manages to pass through the instrument's entrance apertures. The GHRS optics contribute a finite reflectivity at each surface, a transmission at each filter and window, and the blaze efficiency and linear dispersion of the gratings. The detectors have an overall quantum efficiency at each wavelength, spatial gradients related to vignetting or real QE variations, isolated scratches and blemishes, pixel scale irregularities, finite sampling by the diodes, and diode-to-diode gain variations.

To simplify this problem, the calibration is broken into several components. The basic function relates flux and count rate measured at the center of the diode array, for a star centered in the LSA, at a range of wavelengths for each grating. The echelle ripple function is quantified separately for each order. Gradients of sensitivity across the diode array are described by vignetting functions, which vary with wavelength for each grating or echelle order. The SSA throughput is measured relative to the LSA at several wavelengths, but should be independent of grating mode. Blemishes are tabulated as departures from the local sensitivity for each grating. Diode response functions are tabulated as detector properties, which depend on threshold settings but not on optical modes. Finally, the pixel-to-pixel granularity can be identified and suppressed as photometric noise.
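In outline, the FLX_CORR step is "divide the count rate by the sensitivity interpolated to each wavelength". A sketch with made-up sensitivity values (calhrs itself uses quadratic interpolation; linear np.interp stands in here for brevity):

```python
import numpy as np

# Hypothetical sensitivity curve: wavelength (A) vs. sensitivity in
# counts diode^-1 s^-1 per erg cm^-2 s^-1 A^-1 -- illustrative values,
# not taken from any real abshfile/nethfile pair.
sens_wave = np.array([1150.0, 1300.0, 1450.0, 1600.0])
sens_val = np.array([2.0e12, 3.5e12, 3.0e12, 1.5e12])

def calibrate_flux(wave, count_rate):
    """flux = count rate / sensitivity(wavelength)."""
    return count_rate / np.interp(wave, sens_wave, sens_val)
```

The same ratio used the other way (flux times sensitivity) gives the count-rate estimate mentioned above for planning purposes.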
Photometric Correction for Extended Sources

When calhrs photometrically calibrates your observations, it assumes you have observed a point source and adjusts the flux in your spectrum to account for the light of the PSF lost outside the aperture; i.e., it returns the flux you would have seen if all of the flux from your point source fell within the aperture. Therefore, the absolute fluxes (erg/cm^2/s/A) of point sources measured through the LSA and SSA will be the same. Of course, the count rates will be lower for the SSA observation, but calhrs will automatically apply a different sensitivity function to the SSA observation to account for the light loss. The properties of the GHRS apertures are presented in Table 25.1.

calhrs always assumes a point source was observed, and it effectively applies a correction factor for the light lost outside the aperture. If you observed an extended source, your source does not fill the aperture the way a point source does, and the flux calibration from calhrs will be inappropriate; the absolute fluxes for extended sources obtained with calhrs are incorrect. See GHRS Instrument Science Report 061 for more details. To obtain a rough estimate of the specific intensity (in ergs sec^-1 cm^-2 A^-1 arcsec^-2), multiply the observed flux by 0.95 +/- 0.02 for observations taken through the LSA and divide by the area of the aperture in square arcseconds. This assumes that the extended source completely and evenly fills the aperture. Questions about calhrs should be sent to the STScI Help Desk (help@stsci.edu).
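A back-of-the-envelope version of that estimate for the post-COSTAR LSA (a 1.74 arcsec square, per Table 25.1) looks like this; the correction factor and aperture size are from the text, everything else is arithmetic:

```python
def lsa_specific_intensity(flux, aperture_arcsec=1.74):
    """Rough specific intensity (erg s^-1 cm^-2 A^-1 arcsec^-2) for a
    uniform extended source filling the square LSA: multiply the calhrs
    point-source flux by 0.95 (+/- 0.02) and divide by the aperture
    area in square arcseconds. Post-COSTAR LSA side = 1.74 arcsec."""
    area = aperture_arcsec ** 2  # square aperture
    return 0.95 * flux / area
```

For pre-COSTAR data the aperture side would be 2.0 arcsec instead; the estimate remains rough either way, since it assumes the source evenly fills the aperture.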
Table 25.1: Properties of GHRS Apertures

Name   Clear Aperture (mm)   pre-COSTAR   post-COSTAR   Shape
------------------------------------------------------------------------------
LSA    0.559                 2.0"         1.74"         square
SSA    0.067                 0.25"        0.22"         square
------------------------------------------------------------------------------

CHAPTER 26: Specific GHRS Calibration Issues In This Chapter... Target Acquisition Problems Carousel Properties Timing of GHRS Observations SPYBAL Calibration Anomalies Doppler Compensation Target Acquisition Problems An onboard target ACQuisition performs a spiral search (defaults are 3 x 3 for the LSA and 5 x 5 for the SSA) and then returns to the dwell point with the most counts. The end phase of the onboard ACQ is the locate phase (coarse Y and X centering, with a fine X balance at the end of the locate phase). GHRS target acquisitions of very bright stellar targets may fail during the locate phase. This results from low contrast between the central four diodes and the outer diodes of the eight used for acquisitions: the high count rate in the central four diodes flattens the Point Spread Function (PSF), leading to incorrect centering in the aperture. The result is a lower count rate than expected in the LSA science observations, and possibly a missed target during the following SSA acquisition; the SSA science observations in this case may contain just noise. The value of the header keyword FINCODE indicates the success or failure of the GHRS observation. FINCODE=102 indicates the observation completed successfully as planned, but does not indicate whether the target was centered in the aperture. Any other value of FINCODE indicates a possible problem. During RSDP pipeline calibration, if a FINCODE value other than 102 is encountered, a warning message is written to the trailer file (.trl). 
From April 1989 through August 1994, OSS (now part of OPUS) created an Observation Comment file (extension .ocx) for every HST observation. From then through January 1995, OSS created an .ocx file only for degraded observations. If an .ocx file exists for an observation, it should be checked for anomalies. (See Chapter 5 for more information.) If OMS Observation Logs were created for an observation, they should be checked to verify that no anomalies occurred during the target acquisition. Carousel Properties The carousel is rotated to engage the desired dispersive element or mirror and to place the requested wavelength at the center of the diode array. In the spring of 1991, the Side 1 carousel control electronics developed an intermittent failure, and the carousel function was modified to operate the Side 1 carousel from the Side 2 electronics. Any Side 1 observation obtained after April 1991 will therefore have different carousel function coefficients for specific wavelengths. During normal operation, the carousel is commanded to rotate to a pre-selected position determined by the carousel function. The carousel may oscillate before achieving the desired position and take an appreciable amount of time to lock at that position. This can exceed the normal amount of allocated time and thereby affect the observations that follow. Before spring 1995, if the carousel took an appreciable amount of time to lock, the following observations would not occur and were lost. After spring 1995, the flight software was updated to time out a set of GHRS observations once the scheduled end time of the observations was reached; the header keyword FINCODE is set to 106 to indicate to the observer that a "time-out" occurred. The affected GHRS observation may be several observations downstream of the one for which the carousel was slow to lock into position. 
The STSDAS task obsum in the hrs package can be used to display the carousel position and the FINCODE values:

> obsum z2bd010c

Table 26.1: GHRS FINCODE Values

FINCODE   Explanation
------------------------------------------------------------------------------
15X       null balance failure during coarse locate
16X       null balance failure during fine locate
20        number of slews to center exceeded max
101       normal beginning of observation
102       normal end of observation
104       observation ended, over-exposure
105       observation ended, too many bad data
106       observation ended, time out
------------------------------------------------------------------------------

Timing of GHRS Observations Knowledge of the exact time observational activities take place onboard GHRS is not directly available. Instead, we only get timing information when data are dumped to the ground from the onboard computer, from which we may infer when things happened. When data are dumped they are given time tags from the NSSC-1 computer; the spacecraft clock has a time resolution of approximately 0.125 seconds. While GHRS can operate on shorter time scales (0.050 seconds), information about these activities does not make it into the telemetry stream. Nevertheless, as an observer, you may want answers to the timing questions posed in this section. When did my Observation Start? The closest time tag to the start of the exposure is the packet time (PKTTIME) on the Unique Data Log (UDL). (The UDL is the data file with the .u1h extension.) A UDL is always dumped from the spacecraft prior to the start of a science exposure, effectively flagging the start of the observation. The PKTTIME keyword holds an MJD value good to a spacecraft clock tick (~0.125 sec). This MJD value is converted to a date and time and stored in the EXPSTART keyword. The accuracy of the start time is limited to the accuracy of the spacecraft clock--about 0.125 seconds. When did my Observations End? 
For ACCUM mode observations, a second UDL is dumped at the end of the observation and prior to reading out any science data. Therefore, the PKTTIME of the second group of the UDL can be used to mark the end of the observation. For observations generating multiple readouts (e.g., FP-SPLITs), the UDLs come in pairs bracketing the science exposures. The MJD value in this second UDL is converted to a time and date and placed in the EXPEND keyword. The accuracy is the same as for the start time--about 0.125 seconds. For RAPID mode observations, a second UDL is not dumped until the last spectrum has been dumped. In this case, the end of the observation is merely the time that the last spectrum was read out and is contained in the PKTTIME for this last spectrum. Note that the PKTTIME for the science data is the tag when the science data are dumped, and this must wait until the end of the exposure. How Long did my Observation Last? The extent of an observation is reported in the EXPTIME keyword. This time may not be the same as the simple difference between EXPEND and EXPSTART. (See "Was my Observation Interrupted?") The exposure time in the header is simply the exposure time you requested in your proposal times the number of exposures. To verify that you got the exposure time you expected, you can calculate EXPTIME using available header keywords as shown below:

ACCUM mode: EXPTIME = (RPTOBS + 1) x (fpsplits) x MAXGSS x INFOB x STEPTIME

where: * RPTOBS = number of exposures - 1 (i.e., the number of repeats) * fpsplits = 1 (FP_SPLIT=DEF), 2 (FP_SPLIT=TWO or DSTWO), or 4 (FP_SPLIT=FOUR or DSFOUR) * MAXGSS and INFOB contain information concerning the STEPPATT and COMB used. If you want more details about these, contact a GHRS instrument scientist via the STScI Help Desk (help@stsci.edu). * STEPTIME = the step time used (by default 0.2 seconds). 
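The ACCUM-mode arithmetic above is easy to script. A sketch (the function name is ours, and the MAXGSS/INFOB values below are purely illustrative; read the real values from your own headers):

```python
def accum_exptime(rptobs, fp_split, maxgss, infob, steptime=0.2):
    """Total ACCUM exposure time, following the handbook relation
    EXPTIME = (RPTOBS + 1) x fpsplits x MAXGSS x INFOB x STEPTIME."""
    # Map the proposal-level FP_SPLIT setting to the fpsplits factor.
    fpsplits = {"DEF": 1, "TWO": 2, "DSTWO": 2, "FOUR": 4, "DSFOUR": 4}[fp_split]
    return (rptobs + 1) * fpsplits * maxgss * infob * steptime

# e.g., no repeats, FP_SPLIT=FOUR, illustrative MAXGSS=5 and INFOB=2:
t = accum_exptime(rptobs=0, fp_split="FOUR", maxgss=5, infob=2)  # 8.0 seconds
```

Comparing this value against the EXPTIME keyword is a quick consistency check on the header.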
RAPID mode: EXPTIME = (groups) x STEPTIME

where: * groups = the number of spectra read out (i.e., the number of groups in the multi-group image). * STEPTIME = the integration time for each spectrum (specified as SAMPLE-TIME in the proposal). What is the Exposure Time Per Pixel? The exposure per pixel is found in the EXPOSURE keyword in the calibrated flux header. This number is not equal to EXPTIME because of the multiplicity of step patterns, FP-SPLITs, and repeated observations. This value is calculated by calhrs during pipeline calibration. To double-check, you may calculate EXPOSURE as follows:

EXPOSURE = INFOC x MAXGSS x (STEPTIME - 0.002)

where: * INFOC and MAXGSS contain information about the STEPPATT and COMB used. (Note: INFOB and INFOC are different keywords containing similar information.) * STEPTIME = the step time specified. * 0.002 = the time in seconds of the internal overhead for integrating a single STEPTIME. If you are interested in the exposure per diode, you must rebin the data from substep pixels to diodes. Was my Observation Interrupted? By design, GHRS is interruptible--a given exposure may begin, be interrupted, and then resume. Observations are routinely interrupted for SAA passages and earth occultations. In general, this is of no concern. Still, there may be times when you want to know the details of a given observation. Unfortunately, it is nearly impossible to determine when an observation was actually stopped and restarted--this information is just not available in the telemetry stream. The simplest way to determine whether your exposure was interrupted at all is to compare the difference of EXPEND and EXPSTART to EXPTIME. If these values differ by more than a couple of spacecraft clock ticks (~0.25 seconds), then it is likely that the exposure was interrupted and restarted during execution. The duration of the interruption can range from a few minutes when skirting the SAA to about half the orbit for an earth occultation. 
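The suggested comparison of EXPEND minus EXPSTART against EXPTIME can be sketched as follows (the function name is ours, and the MJD values in the examples are invented for illustration; EXPSTART and EXPEND are MJDs, so the difference must be converted from days to seconds):

```python
def was_interrupted(expstart_mjd, expend_mjd, exptime_sec, tick=0.125):
    """Flag a likely interruption if the wall-clock duration (EXPEND minus
    EXPSTART, both MJD) exceeds EXPTIME by more than a couple of
    spacecraft clock ticks (~0.25 s)."""
    duration = (expend_mjd - expstart_mjd) * 86400.0   # MJD days -> seconds
    return (duration - exptime_sec) > 2 * tick

# An uninterrupted 300 s exposure (illustrative MJDs):
a = was_interrupted(49718.0, 49718.0 + 300.0 / 86400.0, 300.0)    # False
# The same exposure spread over 1500 s of wall-clock time (e.g., an
# earth occultation fell in the middle):
b = was_interrupted(49718.0, 49718.0 + 1500.0 / 86400.0, 300.0)   # True
```

Remember that this only tells you *that* an interruption likely occurred, not when; for that, consult the OMS Observation Logs.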
Additional details are available in the OMS Observation Logs, as discussed in Chapter 5. It is also possible that a given observation may end prematurely. This information is encoded in the EXPFLAG keyword. We routinely see observations "time-out" due to carousel resets. If you have additional questions about interruptions, contact the STScI Help Desk (help@stsci.edu). SPYBAL To compensate for the fact that thermal drifts cause a spectrum to "move" on the photocathode, GHRS routinely performs a SPYBAL (spectrum y-balance) to properly center the spectrum on the diode array. This centering is especially important because a given spectrum is tilted with respect to the GHRS x-direction, and lack of proper centering could result in the ends of the spectrum falling off the diode array. Again, this is routinely corrected for and only becomes a problem if SPYBALs are suppressed for long exposures (i.e., several orbits). More details can be found in GHRS ISR 072. A simple calculation can give some idea of how large this effect can be. For example, the G140L spectrum of a point source may fall off the diode array due to drift. Ignoring the width of the spectrum (something on the order of the size of the SSA, or about 8 deflection units), the ends of the spectrum will differ by about 45 deflection units and will be within about 9 deflection units of the edge of the diode array. If we assume a worst-case drift of about 25 deflection units (seen over 10 hours), we find that at the end of this time about 25 percent of the spectrum will have fallen off the edge of the diode array! In the case of an extended object uniformly filling the LSA, the effect is much more pronounced. In this case the width of the spectrum cannot be ignored; the width is equal to the size of the aperture, or about 64 deflection units. For the case of a G140L observation of an extended object in the LSA, we start out with a loss of light. 
The spectrum is already falling off the array, with the ends experiencing about 30 percent light loss. In the time it takes to drift 25 deflection units, some part or all of the spectrum may fall off the array, resulting in a significant reduction in signal! Calibration Anomalies Geomagnetically Induced Motion The displacement of the image relative to the diodes caused by the earth's magnetic field, known as Geomagnetic Image Movement (GIM), may affect a GHRS observation depending upon the length of the exposure and the HST orbit during the observation. The rate of drift of an image across the diodes is small enough that there is no significant smearing of data on time scales of five minutes or less. Very long exposures may exhibit GIM-related symptoms and should be investigated by the user. No correction for GIM has been incorporated into either the operations of the GHRS or the data reduction procedures. Dead or Noisy Diodes Each diode of the linear array (containing 512 diodes, of which diodes 7-506 are science diodes) is independently monitored via its own electronics chain. Diodes may exhibit anomalous behavior or fail; these diodes are grouped together into the "dead or noisy diode" category. Diodes that show anomalous behavior over an extended time are turned off for science observations. In practice, the threshold voltage for an anomalous diode is set to a high value so that it does not detect electrons from the photocathode. The GHRS calibration software corrects for known anomalous diodes. If anomalous absorption features are present in the calibrated data, new noisy or dead diodes may be at fault. The non-standard thresholds for detector diodes are listed in Table 26.2. 
Table 26.2: Non-Standard Thresholds for D1 & D2 Diodes

     Science    Diode #
     Diode #^a  1-512^b  AMP/CH^c  Threshold^d  Comments
------------------------------------------------------------------------------
D1   -          1        24/07     46    Large background diode (50 percent peak +3)
     -          2        24/08     44    Large background diode (50 percent +3)
     -          3        27/07     120   Gold coat radiation diode
     -          4        00/08     255   Diode not connected by design
     85         91       30/00     -     Dead-BDT^e; r0h, r5h change May 29, 1995 (95149 SMS)
     123        129      22/05     50    Threshold 60 percent stops noise
     262        268      15/12     255   Dead electronics-in BDT
     273        279      11/04     255   Bad contact-crosstalk when contact made; in crosstalk table
     436        442      00/15     52    Threshold 60 percent stops noise
     445        451      00/03     44    Bad 4096 bit-in bad diode table (BDT)
     487        493      08/07     255   Very noisy-in BDT
     -          510      04/08     120   Gold coat radiation diode-not functional
     -          511      04/07     42    Large background diode (50 percent peak +3)
     -          512      08/08     47    Large background diode (50 percent +3)
D2   -          1        24/07     46    Large background diode (50 percent +2)
     -          2        24/08     44    Large background diode (50 percent +2)
     -          3        27/07     120   Gold coat radiation diode
     -          4        00/08     255   Diode not connected by design
     80         86       30/14     -     Threshold 60 percent stops noise April 20, 1992, r0h
     104        110      27/11     255   Bad diode-in BDT
     140        146      24/13     40    Threshold 80 percent, crosstalk noise
     144        150      25/12     255   Bad diode-in BDT
     146        152      16/09     49    Threshold 50 percent, occasionally noisy
     168        174      24/10     43    Threshold 50 percent
     237        243      16/00     44    Threshold 50 percent April 20, 1992; threshold 100 percent May 15, 1995
     273        279      11/04     255   Bad contact-crosstalk April 20, 1992
     342        348      11/10     -     Threshold 100 percent April 20, 1992
     440        446      00/14     41    Threshold 50 percent
     441        447      01/02     -     Threshold 70 percent April 20, 1992
     442        448      02/13     44    Bad 16384 bit-in bad diode table
     -          510      04/08     120   Gold coat radiation diode
     -          511      04/07     43    Large background diode (50 percent peak +2)
     -          512      08/08     41    Large background diode (50 percent peak +2)
------------------------------------------------------------------------------
a. Science Diode: diodes 7-506, used for science observations.
b. Diode: the entire (1-512) diode array.
c. AMP/CH: amplifier/channel, the onboard electronic location of the diode.
d. Threshold: discriminator threshold voltage setting for the channel.
e. BDT: bad diode table.

Low Count Rate Observations with low count rates, containing mostly noise, have few or no detected counts from the target. Calibrated data may have a bell- or U-shaped appearance, quantized data values, or extremely low flux values. Such data indicate a missed target, a target too faint for the GHRS, or inappropriate use of the GHRS. Vignetting GHRS calibration data indicate that spectra taken at the blue end of the G270M grating seem to fall off the diode array. This is due to QE gradients on the photocathode, optical vignetting, and possibly the photoelectron image of the spectrum, broadened by the PSF, falling off the diode array at the ends. The net variation from the center towards the ends of spectra is referred to as vignetting. Observations requiring wavelengths near the edge of a grating will suffer from vignetting problems during execution and calibration. Vignetting functions were derived for most gratings; however, it is left to users to fully correct the data. Blemishes Scratches, pits, and other microscopic imperfections in the detector window and on the photocathode surface are referred to as blemishes. The magnitude of the effect of blemishes upon spectra depends on how the spectrum illuminates the photocathode near a blemish. Many blemishes have spatial structures and depths that make them difficult to distinguish from real stellar or interstellar features. Therefore, it is difficult to automatically correct data for the effect of blemishes. In the absence of independent information, individual subexposures can be displayed in diode space to identify non-real spectral features. The calibration code does not correct for blemishes. 
However, the data quality file (.cqh) contains data quality values marking which pixels are affected by known blemishes. See Table 24.2 on page 367 for information about data quality flags. Doppler Compensation Since HST orbits the earth with a velocity of 7.5 km s^-1, spectra obtained with the GHRS may see a Doppler shift varying by up to 15 km s^-1 over an orbit. The effect of the spacecraft velocity is corrected in real time for ACCUM mode observations by deflecting the image of the spectrum by an amount equal to the Doppler shift, so that the spectrum appears fixed with respect to the diode array recording it. RAPID mode observations are not corrected for this effect. Unfortunately, it was discovered that GHRS spectra obtained prior to the end of March 1993 suffered from incorrect Doppler compensation. The problem became visible in a set of high-dispersion spectra obtained with short exposure times, where one could actually see a doubling of spectral features corresponding to the different Doppler shifts applied. At the maximum required correction, the flight software was mistakenly applying zero correction. Affected data will be obvious only in extreme cases, but the problem may degrade your data even when the effect is not obvious. An onboard fix to this first problem was implemented in the flight software as of April 1993; observations made after the update should not suffer from this Doppler compensation error. However, a cumulative error in the onboard Doppler compensation still exists, which reduces the accuracy of the compensation for long exposures. The obsum task in the STSDAS hrs package can be used to identify GHRS spectra which are potentially corrupted by the first Doppler compensation problem. This task identifies periods when the Doppler compensation should have been maximal and provides information to allow you to estimate the fraction of the data in each group that is contaminated by incorrect Doppler compensation. 
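As a sanity check on the size of this effect, the orbital velocity translates into a wavelength shift via the non-relativistic relation delta-lambda = lambda x v/c. A quick sketch (the function name is ours):

```python
C_KM_S = 299792.458      # speed of light, km/s
HST_V_KM_S = 7.5         # HST orbital speed quoted above, km/s

def doppler_shift_angstrom(wavelength_angstrom, v_km_s=HST_V_KM_S):
    """Non-relativistic Doppler shift for a line-of-sight velocity v."""
    return wavelength_angstrom * v_km_s / C_KM_S

# At 1500 A the one-way shift is ~0.038 A; over an orbit the line-of-sight
# component swings between +v and -v, so features can move by twice this.
shift = doppler_shift_angstrom(1500.0)
```

For the high-dispersion gratings this shift amounts to a substantial fraction of a diode, which is why the onboard deflection correction matters.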
If a substantial fraction of the data are corrupted for a given period of time, the only recourse is to discard the affected groups and reduce the remaining good data. See the help file of obsum for a detailed description of its use. Also, all observations of moving targets made before July 1994 were compensated incorrectly. The fix to this problem was implemented in July 1994, and moving-target observations since then do not exhibit the problem. For help identifying and correcting moving-target observations, please contact the STScI Help Desk via e-mail to help@stsci.edu. GHRS Point Spread Function (PSF) The post-COSTAR Point Spread Function (PSF) has a sharp core and weak wings. Observations through the SSA show a Gaussian core with a FWHM of 0.975 diodes and wings that fall off (in intensity) as r^-3 at radii larger than 1 arcsec. When this measured profile is deconvolved from the square SSA aperture, we find a sharp core with a FWHM of about 0.375 diodes--this amounts to about 0.08 arcsec. For a detailed description of the analysis of the PSF, see Robinson, R., "Investigating the Post-COSTAR Point Spread Function for the GHRS," in Calibrating Hubble Space Telescope: Post Servicing Mission, 1995. GHRS Line Spread Function (LSF) The LSFs for the GHRS gratings describe the instrumental broadening for a delta-function spectral feature. Knowledge of such a blurring function is necessary for quantitative studies of GHRS spectral line profiles. The resolution element (one diode) for GHRS was matched to the width of the SSA. By using substepping strategies, it is possible to get properly sampled spectra, obtaining the minimum of two sample points per resolution element. A delta-function spectral line observed through the SSA can be described by a Gaussian with a FWHM of about 0.925 diodes, which is about 3.7 quarter-stepped pixels. (See Gilliland 1992, PASP, 104, 367.) 
Since an SSA spectrum is the best resolution we can obtain with GHRS, it is useful to describe the LSA LSF in terms of the SSA LSF. Consequently, we have measured the LSA-SSA differential LSF for a number of gratings and at a sample of wavelengths. This differential LSF satisfies the relationship LSF * SSA = LSA (where * denotes convolution); i.e., the differential LSF is the function that, when convolved with an observed SSA spectrum, produces the best match to an identical spectrum obtained through the LSA. By combining the intrinsic LSF for the SSA with the empirical differential LSF, we can obtain the intrinsic LSF for the LSA. Since the SSA and differential LSFs are Gaussians, we obtain an LSA LSF that is also a Gaussian, with a FWHM slightly greater than that of the SSA. The pre-COSTAR LSF of the GHRS was characterized by a Gaussian core nearly twice as broad as the instrumental resolution limit provided by the SSA, with extended non-Gaussian wings. The post-COSTAR LSF for the LSA has a Gaussian core only 19-51 percent broader than spectra from the SSA, and the extended wings are absent. Additional information about the GHRS LSF can be found in GHRS ISR 063. Table 26.3 summarizes the post-COSTAR LSF for the SSA and the LSA-SSA differential LSF.

Table 26.3: GHRS post-COSTAR Differential LSF

Grating   Wavelength (A)   FWHM Differential LSF (diodes)   Relative FWHM (LSA/SSA)
------------------------------------------------------------------------------
G140L     1200             1.0                              1.51
G140L     1500             1.0                              1.51
G160M     1360             0.60                             1.19
G160M     1900             0.72                             1.27
G200M     1900             0.60                             1.19
ECH-B     1900             0.60                             1.19
ECH-B     2680             0.82                             1.34
------------------------------------------------------------------------------

Deconvolution of GHRS spectra was investigated after the spherical aberration was found in the primary mirror. With COSTAR, the need for deconvolution has become less pressing; however, for the best spectral resolution, it is possible to deconvolve LSA spectra to the level of SSA spectra. 
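Because both the SSA LSF and the differential LSF are modeled as Gaussians, the FWHM of their convolution is obtained by adding the two widths in quadrature. A sketch using the 0.925-diode SSA width quoted above (the function name is ours; the ratios this produces are consistent with the medium-resolution entries of Table 26.3):

```python
import math

SSA_FWHM_DIODES = 0.925   # intrinsic SSA LSF width quoted in the text

def lsa_fwhm(diff_lsf_fwhm, ssa_fwhm=SSA_FWHM_DIODES):
    """FWHM of the LSA LSF: convolution of two Gaussians adds
    their FWHMs in quadrature."""
    return math.hypot(ssa_fwhm, diff_lsf_fwhm)

# G160M at 1360 A: differential LSF of 0.60 diodes
ratio = lsa_fwhm(0.60) / SSA_FWHM_DIODES   # ~1.19, as in Table 26.3
```

The quadrature rule reproduces the 1.19, 1.27, and 1.34 relative widths in the table; use the tabulated values themselves where available.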
See "The Restoration of HST Images and Spectra" (proceedings of the HST Calibration Workshop at STScI), STScI, 1990. The STSDAS task lucy can be used to deconvolve GHRS spectra. High Signal-to-Noise Observations It is possible to use discrete grating movements in addition to FP-SPLITs to achieve S/N of 1,000 or more. See, e.g., Lambert et al. 1994, ApJ, 420, 756. Spatial deconvolution: both a point spread function (PSF) and a line spread function (LSF) exist for the GHRS; they are discussed in the proceedings of the HST Calibration Workshops. ------------------------------------------------------------------------------ PART 8: WIDE FIELD PLANETARY CAMERA 2 This section of the Data Handbook is a guide to working with data from the second Wide Field Planetary Camera (WFPC2). The WFPC2, which was installed during the servicing mission of December 1993, differs substantially from the original camera, and we expect that even experienced users of the original WF/PC-1 will wish to review this chapter. This handbook provides most of the information you need to process your data; however, the WFPC2 is a new instrument, and as a result, calibration and analysis techniques continue to evolve. We therefore recommend that you consult the WWW pages of the WFPC2 group to see if there are any new developments that might affect you. The structure of this part is as follows. First we provide a basic overview of the WFPC2 instrument. We then discuss the format of the data you will receive and explain how to correlate the data with your planned observations. The standard pipeline processing and calibration performed at STScI are then described. Limitations of the calibration files and pipeline process are then discussed, so that you can decide whether recalibrating the raw data would produce a better dataset than that provided by the pipeline processing. A detailed discussion of the steps involved in recalibration is then presented. 
We discuss various subtle issues involved in WFPC2 data processing and reduction. Finally, we provide an extensive introduction to doing photometry with the WFPC2. In this section we explain a number of problems presented by WFPC2 data, along with suggested solutions. ------------------------------------------------------------------------------ CHAPTER 36: WFPC2 Instrument Overview Figure 36.1 shows a schematic of the optical arrangement of the WFPC2. The central portion of the optical telescope assembly (OTA) f/24 beam is intercepted by a steerable pick-off mirror attached to the WFPC2 and is diverted through an open port entry into the WFPC2. The beam then passes through a shutter and interposable filters. A total of 48 spectral elements and polarizers are contained in an assembly of 12 filter wheels. The light then falls onto a shallow-angle, four-faceted pyramid located at the aberrated OTA focus. Each face of the pyramid is a concave spherical surface. The pyramid divides the OTA image of the sky into four parts. After leaving the pyramid, each quarter of the full field of view is relayed by an optical flat to a Cassegrain relay that forms a second field image on a charge-coupled device (CCD) of 800 x 800 pixels. Each of these detectors is housed in a cell sealed by a MgF2 window; this window is figured to serve as a field flattener. The aberrated HST wavefront is corrected by introducing an equal but opposite error in each of the four Cassegrain relays. An image of the HST primary mirror is formed on the secondary mirrors in the Cassegrain relays. The spherical aberration from the telescope's primary mirror is corrected on these secondary mirrors, which are extremely aspheric; the resulting point spread function is quite close to that originally expected for WF/PC-1. Figure 36.1 shows the WFPC2 optical configuration. The optics of three of the four cameras are essentially identical and produce a final focal ratio of f/12.9. 
These are the Wide Field Cameras (WFC). The fourth camera, known as the Planetary Camera (PC), has a focal ratio of f/28.3. Figure 36.1: WFPC2 Optical Configuration Figure 36.2: WFPC2 Field of View Projected on the Sky Figure 36.2 shows the field of view of WFPC2 projected on the sky. The readout direction is marked with arrows near the start of the first row in each CCD. The x,y coordinate directions are for POS-TARG commands. The position angle of V3 on the sky varies with pointing direction and observation epoch, but is given in the calibrated science header by the keyword PA_V3.

Table 36.1: Camera Configurations

Camera    Pixels      Field of View   Scale               f/ratio
------------------------------------------------------------------------------
PC        800 x 800   36" x 36"       0.0455" per pixel   28.3
WF2,3,4   800 x 800   80" x 80"       0.0996" per pixel   12.9
------------------------------------------------------------------------------

The Planetary Camera provides a field of view sufficient to obtain full-disk images of all planets except Jupiter. However, even with this high-resolution camera, the pixels undersample the point spread function of the telescope and camera optics by a factor of two at 5800 A. The WF pixels are a factor of two larger and thus undersample the image by a factor of four at visual wavelengths. It is possible to recover some of the resolution lost to these large pixels by image dithering, i.e., taking observations at different sub-pixel offsets. A short discussion of dithering is provided in "Dithering" on page 520. Two readout modes are available on the WFPC2: FULL and AREA (the mode used for a given observation is shown in the MODE keyword). In FULL mode each pixel is read out individually. In AREA mode pixels are summed in 2 x 2 boxes before readout. The advantage of AREA mode is that the readout noise for the larger summed pixels is nearly the same as for the unsummed pixels: 6 e- vs. 5 e- per pixel. 
Thus, AREA mode can be useful in observations of extended sources when readout electrons are the primary source of noise (often the case in the far UV). The readout direction of the four CCDs is defined so that in IRAF pixel numbering (origin at the lower left corner), the origin of the CCD lies at the corner of the chip pointing towards the center of the WFPC2 pyramid (see Figure 36.2). As a result of the aberration of the primary beam, the light from sources near the pyramid edges is divided between adjacent chips, and consequently the lower columns and rows of the PC and WFC chips are strongly vignetted, as shown in Table 36.2. In this table, the CCD x,y (column,row) numbers given vary at the 1-2 pixel level because of bending and tilting of the field edge in detector coordinates due to geometric distortion in the camera.

Table 36.2: Inner Field Edges of Field Projected Onto CCDs

Camera   Start Vignetted Field   Contiguous Field      Start Unvignetted Field
------------------------------------------------------------------------------
PC1      x > 0 and y > 8         x > 44 and y > 52     x > 88 and y > 96
WF2      x > 26 and y > 6        x > 46 and y > 26     x > 66 and y > 46
WF3      x > 10 and y > 27       x > 30 and y > 47     x > 50 and y > 67
WF4      x > 23 and y > 24       x > 43 and y > 44     x > 63 and y > 64
------------------------------------------------------------------------------

The orientation of each camera on the sky is provided by the ORIENTAT group keyword in the image headers (more on groups in the next section). ORIENTAT gives the position angle of north (measured from north through east) relative to the image y axis. The STSDAS task wmosaic can be used to combine all four chips into a mosaic, taking into account the slight rotation between the edges of the four chips and the geometric distortion of the field (see "Mosaic WF/PC-1 Images" on page 32). ------------------------------------------------------------------------------ CHAPTER 37: WFPC2 Planned vs. Executed Observations In This Chapter... 
Data Files and Extensions Header Keywords Correlating Phase II Exposures with Data Files Frequently, the time between submitting your HST Phase II program and the receipt of your data can be many months. In this section we discuss how to correlate the Phase II proposal with the final data. We describe the data format used by STScI to distribute WFPC2 data and the meanings of the header keywords that the user is likely to find most important. We then directly compare the received data with the planned exposure logsheet as we explain how to check that the data received are those you requested. Data Files and Extensions If you used strfits to read your data tape (or convert the Archive's FITS files), as described in Chapter 3, your data will now be in GEIS format. If you do a directory listing (type dir within IRAF), you will see that the files all have a nine-character rootname and a three-character extension. For each instrument, this extension uniquely identifies the file contents. The WFPC2 extensions are listed below.

Table 37.1: WFPC2 Dataset Extensions and File Sizes

Extension    File Contents
------------------------------------------------------------------------------
Raw Data Files
.d0h/.d0d    Raw science data
.q0h/.q0d    Data quality for raw science data
.x0h/.x0d    Extracted engineering data
.q1h/.q1d    Data quality for extracted engineering data
.shh/.shd    Standard header packet containing observation parameters
Calibrated Data Files
.c0h/.c0d    Calibrated science data
.c1h/.c1d    Data quality for calibrated science data
.c2h/.c2d    Histogram of science data pixel values
.c3h/.c3d    Saturated pixel map
.trl         Trailer file
------------------------------------------------------------------------------

Files whose extensions end with the letter "h" (e.g., u2850303p.c1h) are ASCII header files. The header files contain keywords that describe the parameters used to take the observation, the processing of the data, and the properties of the image. 
Files whose extensions end in the letter "d" (e.g., u2850303p.c1d) are binary data files; these files contain the data as well as the group keywords. A single GEIS image is composed of a header and data pair (e.g., the files u2850303p.c1h and u2850303p.c1d together represent a single image). A single WFPC2 exposure is obtained as four images (one image for each CCD chip). GEIS files use group format to keep all of the data from a given HST exposure together in a single image file. The data corresponding to each sub-image for the WFPC2 are stored sequentially in the groups of a single GEIS image.

The header file for an image contains the information that applies to the observation as a whole (i.e., to all the groups in the image), viewable by paging the header. The group-specific (that is, chip-specific) keyword information is stored with the group data itself in the binary data file; group keywords are only accessible via STSDAS tasks (like hedit or imhead). WFPC2 images are normally 4-group images: group 1 is used for the planetary camera and groups 2, 3, and 4 are used for the wide field camera. If only a subset of the chips is read out, only a subset of the groups will be present. The group keyword DETECTOR lists the chip used (1 through 4), regardless of the number of chips read out.

Header Keywords

In Table 37.2 we list header keywords found in a WFPC2 .c0h image which many observers are likely to find useful. A complete list of WFPC2 header keywords can be found on the WFPC2 pages. The STSDAS tasks hedit or imhead can be used to view any or all of the header and group keywords. WFPC2 keywords include items such as observing mode, integration time, filters and reference files used, calibration steps performed, and the properties of the data itself (e.g., number of groups, dimensions in pixels of each group, reference pixels, coordinates, scale, flux units, image statistics).
Table 37.2: WFPC2 Header Keywords

Keyword    Description
------------------------------------------------------------------------------
Information about the groups
GROUPS     Multi-group image? Indicates whether data has groups.
GCOUNT     Number of groups per observation (1 to 4)

Coordinate-related keywords
CRVAL1     RA of reference pixel (deg)
CRVAL2     Dec of reference pixel (deg)
CRPIX1     X coordinate of reference pixel
CRPIX2     Y coordinate of reference pixel
CD1_1      Partial derivative of RA with respect to x
CD1_2      Partial derivative of RA with respect to y
CD2_1      Partial derivative of Dec with respect to x
CD2_2      Partial derivative of Dec with respect to y

Image orientation keywords
MIR_REVR   Is image mirror reversed?
ORIENTAT   Orientation of image (deg)

Bias level information (columns 3-14 of the .x0h/.x0d file)
DEZERO     Bias level from EED extended register
BIASEVEN   Bias level based on average of even columns in .x0h/.x0d file
BIASODD    Bias level based on average of odd columns

Pixel statistics
GOODMIN    Minimum value of "good" pixels (not flagged in DQF)
GOODMAX    Maximum value of "good" pixels
DATAMEAN   Mean value of "good" pixels

Photometry keywords
PHOTMODE   Photometry mode
PHOTFLAM   Inverse sensitivity (units of erg/sec/cm^2/Angstrom for 1 DN/sec)
PHOTZPT    Zero point (currently -21.10, if DOPHOTOM = yes)
PHOTPLAM   Pivot wavelength (in angstroms)
PHOTBW     rms bandwidth of filter (in angstroms)

Image statistics keywords
MEDIAN     Middle data value when good quality pixels sorted
HISTWIDE   Width of the histogram
SKEWNESS   Skewness of the histogram
MEANC10    Mean of a 10 x 10 region at the center of the chip
MEANC100   Mean of a 100 x 100 region at the center of the chip
MEANC300   Mean of a 300 x 300 region at the center of the chip
BACKGRND   Estimated background level

Image keywords
INSTRUME   Instrument used; always WFPC for either WF or PC
ROOTNAME   Rootname of the observation set
FILETYPE   SHP - standard header packet
           EXT - extracted engineering file
           EDQ - EED data quality file
           SDQ - science data quality file
           SCI - science data file
MODE       Mode: FULL (full resolution) or AREA (2x2 pixel summation)
SERIALS    Serial clocks: ON, OFF

Data type keywords
IMAGETYP   DARK/BIAS/IFLAT/UFLAT/VFLAT/KSPOT/EXT/ECAL
CDBSFILE   GENERIC/BIAS/DARK/FLAT/MASK/NO
           Is the image a reference file and if so, type is specified

Reference file selection keywords
DATE       Date file written (dd/mm/yy)
FILTNAM1   First filter name
FILTNAM2   Second filter name; blank if none
FILTER1    First filter number (0-48) (Historical, but used in SOGS)
FILTER2    Second filter number (0-48)
FILTROT    Partial filter rotation angle (degrees)
LRFWAVE    Linear ramp filter wavelength
ATODGAIN   Analog to digital gain (electrons/DN)

Calibration switches
MASKCORR   Do mask correction: PERFORM, OMIT, COMPLETE
ATODCORR   Do A-to-D correction: PERFORM, OMIT, COMPLETE
BLEVCORR   Do bias level correction: PERFORM, OMIT, COMPLETE
BIASCORR   Do bias correction: PERFORM, OMIT, COMPLETE
DARKCORR   Do dark correction: PERFORM, OMIT, COMPLETE
FLATCORR   Do flatfield correction: PERFORM, OMIT, COMPLETE
SHADCORR   Do shaded shutter correction: PERFORM, OMIT, COMPLETE
DOSATMAP   Output saturated pixel map: PERFORM, OMIT, COMPLETE
DOPHOTOM   Fill photometry keywords: PERFORM, OMIT, COMPLETE
DOHISTOS   Make histograms: PERFORM, OMIT, COMPLETE
OUTDTYPE   Output image datatype: REAL, LONG, SHORT
           (always set to REAL by the PODPS pipeline)

Calibration reference files used^a
MASKFILE   Name of the input DQF of known bad pixels
ATODFILE   Name of the A-to-D conversion file
BLEVFILE   Engineering file with extended register data
BLEVDFIL   Engineering file data quality file (DQF) name
BIASFILE   Name of the bias frame reference file
BIASDFIL   Name of the bias frame reference DQF
DARKFILE   Name of the dark reference file
DARKDFIL   Name of the dark reference DQF
FLATFILE   Name of the flatfield reference file
FLATDFIL   Name of the flatfield reference DQF
SHADFILE   Name of the reference file for shutter shading
PHOTTAB    Name of the photometry calibration table
SATURATE   Data value at which saturation occurs (always 4095 for WFPC2,
           which includes the bias)

Ephemeris data
PA_V3      Position angle of V3 axis of HST
RA_SUN     Right ascension of the sun (deg)
DEC_SUN    Declination of the sun (deg)
EQNX_SUN   Equinox of the sun

Fill values
PODPSFF    0 (no PODPS fill), 1 (PODPS fill present)
RSDPFILL   Bad data fill value set in PODPS for calibrated image

Exposure information
DARKTIME   Estimate of darktime (in sec)
EQUINOX    Equinox of the celestial coordinate system
SUNANGLE   Angle between sun and V1 axis (deg)
MOONANGL   Angle between moon and V1 axis (deg)
SUN_ALT    Altitude of the sun above earth's limb (deg)
FGSLOCK    Commanded FGS lock (FINE, COARSE, GYROS, UNKNOWN)

Timing information
DATE-OBS   UT date of start of observation (dd/mm/yy)
TIME-OBS   UT time of start of observation (hh:mm:ss)
EXPSTART   Exposure start time (Modified Julian Date)
EXPEND     Exposure end time (Modified Julian Date)
EXPTIME    Exposure duration (seconds)
EXPFLAG    How exposure time was calculated (NORMAL, INTERRUPTED,
           INCOMPLETE, EXTENDED, UNCERTAIN, INDETERMINATE, or PREDICTED)

Proposal information
TARGNAME   Proposer's target name
RA_TARG    Right ascension of the target (deg) (J2000)
DEC_TARG   Declination of the target (deg) (J2000)
PROPOSID   RPS2 proposal identifier
------------------------------------------------------------------------------
a. Calibration reference file keywords are populated even if unused.

Correlating Phase II Exposures with Data Files

Observations are scheduled on HST using procedures intended to maximize the efficiency of the telescope's entire observing program. Therefore, unless special timing requirements are specified in your Phase II proposal, visits and, on occasion, exposures within visits, may not follow the sequence written in the Phase II proposal. As a result, the data you receive on your tape may be in quite a different order from that originally proposed in your Phase II. In this section we discuss how to correlate the data you receive with the exposures you requested.
If, for some reason, there is any uncertainty about which program a data file belongs to (you may, for instance, have more than one HST program running), this can be quickly checked using the PROPOSID keyword in the header file. The simplest way to determine which data file came from a particular Phase II proposal exposure line is to compare exposure information in the Phase II proposal with data file header keywords. For WFPC2 data, the most useful comparisons are shown in the table below.

Table 37.3: Comparing Phase II Proposal Keywords to Data Header Keywords

Phase II            Data Header
------------------------------------------------------------------------------
Target_Name         TARGNAME
Position            RA_TARG, DEC_TARG
Spectral Element    FILTNAM1
Time_Per_Exposure   EXPTIME
------------------------------------------------------------------------------

A convenient tool for viewing some of the most important data header keywords in an easy-to-read formatted output is the STSDAS task iminfo. An example of the output of this task is shown in Figure 37.1. Note that the data header keywords are expanded to standard English words in this output. The header file (.c0h extension) can also be examined with the IRAF tools hedit or imheader, with any standard text editor, or simply by listing the contents of the file.

Figure 37.1: Displaying WFPC2 Header Keywords with iminfo

In the specific example shown in Figure 37.1 we see that the Proposal ID is given as 05837. A copy of the proposal can be retrieved from the STScI site using the search tool found at the URL:

http://presto.stsci.edu/public/propinfo.html

Enter the proposal ID into the space provided (without the leading 0) and click on [Get Program Information]. Under Program Contents you have a choice of the file you typed in during Phase II or a formatted output. The former may be the most familiar; we reproduce a portion of that file in figure m.m. The Exposure ID listed by iminfo is 02-023. This corresponds to visit 02, exposure 23.
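The iminfo Exposure ID string can be split mechanically into its visit and exposure parts. A trivial sketch in Python (parse_exposure_id is an illustrative helper, not an STSDAS task):

```python
def parse_exposure_id(exposure_id):
    """Split an iminfo Exposure ID such as "02-023" into visit and exposure.

    "02-023" corresponds to visit 02, exposure 23, as in the example above.
    """
    visit, exposure = exposure_id.split("-")
    return visit, int(exposure)
```

For instance, parse_exposure_id("02-023") returns ("02", 23), i.e., visit 02, exposure 23.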
A different format was used in cycles 0 through 4; exposures in these proposals have a single, unique numeric identifier. To reach this exposure line, page down through the proposal until Visit 02 is reached. Now search for exposure 23 in Visit 02. As shown in Figure 37.2, this exposure requested a single 5 second exposure of target CAL-GANY-W through filter F410M. A quick comparison with the keywords listed by iminfo shows that, indeed, this data file contains the observation specified in this exposure line.

Figure 37.2: Exposure Log Sheet for WFPC2

A comparison of these keywords should quickly identify the data file corresponding to a given exposure line. There are, however, two cases in which such a comparison is somewhat more complicated.

It is recommended that WFPC2 exposures longer than 600 seconds be split into two shorter exposures to facilitate removal of cosmic rays. If the optional parameters CR-SPLIT and CR-TOLERANCE are omitted in the Phase II, and the exposure is longer than 600 seconds, it will be split. The default CR-TOLERANCE of 0.2 will be used, meaning that the split exposure times will each range from 30 to 70 percent of the total exposure, with their sum equal to the original total exposure time.

Exposure times may also be shortened without the approval of the PI so long as the resulting S/N is at least 90 percent of that with the original exposure time. This may be required to fit observations into specific orbital time slots. If, after examining your exposure headers, you still have questions regarding the execution of your observing plan, we recommend you speak with your program's Contact Scientist.

Once you have verified target, exposure time, and filter correctness, you will want to check on guide star status. The output generated by iminfo does not contain this information, but you can easily check by using the IRAF task hedit. The command

hedit u2p60204t.c0h FGSLOCK .

generates the output

u2p60204t.c0h, FGSLOCK FINE.
For your proposal, insert the appropriate filename. Also be sure to include the final ".", which causes the output to be printed to the screen. The FGSLOCK keyword can have the values FINE, COARSE, GYROS, or UNKNOWN. COARSE tracking is no longer allowed, so your data will most likely show either FINE or GYROS. Gyro tracking allows a drift rate of approximately 1 mas/sec; it would only be used if requested by the proposer. FINE tracking typically holds pointing with an r.m.s. error of less than 7 mas. Typically two guide stars are used in HST observations, but on occasion only one appropriate guide star can be found. Such observations will suffer from small drift rates (of order several mas per minute). If you suspect the quality of tracking obtained during your observations, please review Chapter 5, which describes how to determine the number and quality of guide stars actually used, as well as how to use the OMS jitter files.

------------------------------------------------------------------------------

CHAPTER 38: Calibrating WFPC2 Data

In This Chapter...

Overview of Pipeline Calibration
WFPC2 Calibration Process

Your data were received from the Space Telescope Data Capture Facility by the Post Observation Data Processing System (PODPS). There they were passed to Routine Science Data Processing (RSDP)-referred to as the pipeline-to be processed and calibrated. All of the steps performed by the pipeline are recorded in the trailer file for your dataset. Figure 38.1 shows an example of a trailer file and identifies comments made during the following pipeline steps:

1. The data are partitioned (separated into individual files, e.g., the engineering and science data are separated).
2. The data are edited to insert fill (an arbitrary assigned value given in the header) in place of missing data.
3. The data are evaluated to determine discrepancies between the subset of the planned and executed observational parameters.
4. The data are converted to a generic format and the header keywords populated.
5. The data are calibrated using a standard WFPC2-specific calibration algorithm and the best available calibration files.

Figure 38.1: Sample Trailer File

The calibration software used by the pipeline is the same as that provided within STSDAS (calwp2). The calibration files and tables used are taken from the Calibration Data Base (CDBS) at STScI and are the most up-to-date calibration files available at the time your data were processed. All CDBS files are available to you through the HST Data Archive.

Overview of Pipeline Calibration

In this section we provide a schematic view of the pipeline calibration. The flow of the data through the pipeline is presented in schematic form in Figure 38.2. The pipeline calibration software (which the user can find as calwp2 in the hst_calib package) takes as input the raw WFPC2 data file pairs (see Table 37.1 on page 476) .d0h/.d0d, .q0h/.q0d, .x0h/.x0d, .q1h/.q1d and any necessary calibration reference images or tables. The software determines which calibration steps to perform by looking at the values of the calibration switches (e.g., MASKCORR, BIASCORR, etc.) in the header of the raw data (.d0h) file. Likewise, it selects the reference files to use in the calibration of your data by examining the reference file keywords (e.g., MASKFILE, BIASFILE, BIASDFIL, etc.). The appropriate values of the calibration switches and reference file keywords depend on the instrumental configuration used, the date when the observations were taken, and any special pre-specified constraints. They were initially set in the headers of your raw data file in the RSDP pipeline during generic conversion; if you decide to reprocess, they can be redefined (using hedit, for example) and calwp2 run on the raw files.
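Because the switch keywords take the values PERFORM, OMIT, or COMPLETE, the set of steps calwp2 will run can be predicted directly from the raw header. A sketch in Python (the header dictionary stands in for keywords read from the .d0h file; this mimics, but is not, the actual calwp2 selection logic):

```python
# Calibration switch keywords checked by the pipeline (from the text above).
SWITCHES = ["MASKCORR", "ATODCORR", "BLEVCORR", "BIASCORR",
            "DARKCORR", "FLATCORR", "SHADCORR",
            "DOSATMAP", "DOPHOTOM", "DOHISTOS"]

def steps_to_perform(header):
    """Return the calibration steps whose switch is set to PERFORM.

    `header` is a dict standing in for the raw .d0h header keywords;
    switches set to OMIT or COMPLETE are skipped.
    """
    return [s for s in SWITCHES if header.get(s) == "PERFORM"]
```

If you redefine a switch with hedit before reprocessing, the corresponding step would move in or out of this list.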
To determine what calibration steps the pipeline applied to your data and which calibration reference files were used to calibrate your data, you should look at the values of the calibration switches in the header of your raw (or calibrated) data. Calibration switches will have the value PERFORM, OMIT, or COMPLETE, depending on whether the step has yet to be performed, is not performed during the processing of this dataset, or was completed. As with other header keywords, the calibration keywords can be viewed using, for example, imhead or hedit. Alternately, you can use the chcalpar task in the STSDAS tools package to view the calibration keywords directly.

There are history records at the bottom of the header file of your calibrated data (as well as in the calibration reference file headers). These history records sometimes contain important information regarding the reference files used to calibrate your data in the pipeline.

The flow chart below summarizes the sequence of calibration steps performed by calwp2, including the input calibration reference files and tables, and the output data files from each step. The purpose of each calibration step is briefly described in the accompanying table; a more detailed explanation is provided in the following section.

Figure 38.2: Pipeline Processing by calwp2

Table 38.1: Calibration Steps and Reference Files Used for WFPC2 Pipeline Processing

Switch     Processing Step                                        Reference File
------------------------------------------------------------------------------
MASKCORR   Update the data quality file using the static bad      maskfile (r0h)
           pixel mask reference file (maskfile), which flags
           defects in the CCD that degrade pixel performance
           and that are stable over time.
ATODCORR   Correct the value of each pixel for the                atodfile (r1h)
           analog-to-digital conversion error using
           information in the A/D lookup reference file
           (atodfile).
BLEVCORR   Subtract the mean bias level from each pixel in the    Extracted
           science data. Mean values are determined separately    Engineering
           for even column pixels (group parameter BIASEVEN)      File
           and odd column pixels (BIASODD) because bias           (x0h/q1h)
           levels exhibit a column-wise pattern that changes
           over time.
BIASCORR   Subtract bias image reference file (biasfile) from     biasfile,
           the input science image and update output data         biasdfil
           quality file with bias image data quality              (r2h/b2h)
           (biasdfil).
DARKCORR   Correct for dark current by scaling dark image         darkfile,
           reference file and subtracting it from science         darkdfil
           data. Dark image is multiplied by total dark           (r3h/b3h)
           accumulation time (keyword DARKTIME).
FLATCORR   Correct for pixel-to-pixel gain variation by           flatfile,
           multiplying by flatfield image.                        flatdfil
                                                                  (r4h/b4h)
SHADCORR   Remove shading due to finite shutter velocity          shadfile (r5h)
           (exposures less than 10 seconds).
DOSATMAP   Create an output data quality file (.c3h) that
           flags pixels that saturated the A/D converter.
           This is redundant because saturated pixels are
           flagged in the DQF (.c1h).
DOPHOTOM   Determine absolute sensitivity using throughputs in    phottab (cw0)
           photometry calibration table (phottab). This step
           does not change science data values.
DOHISTOS   Create 3-row image (.c2h) for each group. Row 1 is
           a histogram of raw science values, row 2 of the
           A/D corrected data, row 3 of the calibrated image.
------------------------------------------------------------------------------

WFPC2 Calibration Process

Each calibration step (and the keyword switches used to turn the step on or off) is described in detail in the following sections; the steps are performed in the following order:

1. Flag static bad pixels.
2. Do analog-to-digital (A/D) correction.
3. Subtract bias level.
4. Subtract bias file.
5. Subtract dark.
6. Multiply by flatfield.
7. Apply shutter shading correction to exposures of less than 10 seconds.
8. Calculate photometry keywords.
9. Calculate histograms.
10. Generate final science data quality file.
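The order of the arithmetic steps above can be illustrated on a single pixel value. A deliberately simplified scalar sketch in Python (calwp2 operates on whole images and reference files, not scalars; the helper below only illustrates the order and sense of the corrections):

```python
def calibrate_pixel(raw_dn, bias_level, bias_img, dark_rate, darktime,
                    flat, shading=1.0):
    """Illustrative order of the WFPC2 calibration arithmetic for one pixel.

    bias_level : mean bias for this column parity (BIASEVEN/BIASODD)
    bias_img   : position-dependent bias pattern (bias reference file)
    dark_rate  : dark reference value, per second (scaled by DARKTIME)
    darktime   : DARKTIME keyword value, seconds
    flat       : flatfield reference value (the image is multiplied by it)
    shading    : shutter-shading factor (matters only for < 10 s exposures)
    """
    dn = raw_dn - bias_level          # 3. subtract bias level
    dn -= bias_img                    # 4. subtract bias file
    dn -= dark_rate * darktime        # 5. subtract scaled dark
    dn *= flat                        # 6. multiply by flatfield
    dn *= shading                     # 7. shutter shading correction
    return dn
```

The static mask and A/D steps (1-2) change the data quality file and the digitization correction rather than following this simple arithmetic, so they are omitted from the sketch.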
Calibration Files

The WFPC2 reference files used in the pipeline calibration, along with their extensions, are listed in Table 38.2; the associated data quality files are given .b* extensions. The rootname of a reference file is based on the time that the file was delivered to the Calibration Data Base System (CDBS); the file names and history of all WFPC2 reference files in CDBS (and retrievable from the HST Archive) are contained in the Reference File Memo on the world wide web, which is routinely updated with each new delivery. Any CDBS file is available for retrieval through the HST Data Archive.

Table 38.2: WFPC2 Calibration Reference Files

Extension            Reference File
------------------------------------------------------------------------------
r0h, r0d             Static mask
r1h, r1d             Analog to digital look-up table
r2h, r2d, b2h, b2d   Bias
r3h, r3d, b3h, b3d   Dark frame
r4h, r4d, b4h, b4d   Flatfield
r5h, r5d             Shutter shading
c3f                  Photometry table
------------------------------------------------------------------------------

All of the installed reference files contain HISTORY keywords at the end of the header which can be viewed using the imhead task. These keywords contain more detailed information about how the file was created and installed in the database.

Calibration Steps

Application of the Static Mask

The static mask reference file contains a map of the known bad pixels and blocked columns. If this correction is performed (MASKCORR=PERFORM), the mask is included in the calibration output data quality files. The mask reference file is identified in the MASKFILE keyword. The science data itself is not changed in any way; the STSDAS task wfixup can be used to interpolate across bad pixels flagged in the final data quality file (.c1h).

A/D Fixup

The analog-to-digital converter takes the observed charge in each pixel of the CCD and converts it to a digital number. Two settings, or gains, of the A/D are used on WFPC2.
The first converts a charge of approximately 7 electrons to a single count (called a Data Number or DN), and the second converts a charge of approximately 15 electrons to a DN (the actual value is closer to 14). A/D converters work by comparing the observed charge with a reference and act mathematically as a "floor" function. However, these devices are not perfect, and some values are reported more (or less) frequently than they would be by a perfect device. One can adjust statistically for this bias; fortunately the WFPC2 A/D converters are relatively well-behaved and this is a small correction. The best estimate of the A/D bias is removed when the ATODCORR keyword is set to PERFORM. The calibration file used to correct for the A/D errors has the extension .r1h.

Bias Level Removal

The charges that are in each pixel sit on top of an electronic pedestal, or "bias", designed to keep the A/D levels consistently above zero. The exact level of the bias must be determined empirically using extended register pixels which do not view the sky. The values of these pixels are placed in the extracted engineering files (.x0h/.x0d). The overscan area in use is [9:14,10:790], with BIASODD being determined from columns 10, 12, and 14, and BIASEVEN being determined from columns 9, 11, and 13 (this surprising nomenclature is due to an offset in the .x0h file; even and odd are correctly oriented with respect to the data file columns). A larger part of the overscan region was used for very early observations, resulting in oversubtraction of the bias level and possibly negative pixel values. Separate even and odd bias levels were only extracted after May 4, 1994. See "WFPC2 Error Sources" on page 495 for more information on how to deal with early WFPC2 data. The keyword BLEVCORR controls the subtraction of the bias in calwp2.

Bias Image Subtraction

The value of the bias pedestal can vary with position across the chip.
Therefore, once the bias level correction has been completed, the pipeline looks at the keyword BIASCORR. If it is set to "PERFORM", then a bias file (.r2h) is subtracted from the data to remove any position-dependent bias pattern. The bias reference file is generated from a set of A/D and bias-level corrected zero-length exposures. The correction consists of subtracting the bias file from the observation and flagging in the .c1h/.c1d file any bad pixels noted in the bias data quality file (.b2h/.b2d).

Dark Image Subtraction

A dark correction is required to account for the thermally-induced dark current as well as a glow (see Chapter 39) from the field flattening lens. The dark reference file is generated from ten or more individual dark frames (long exposures taken with the shutter closed) that have each had the standard calibration corrections applied (ATODCORR, BLEVCORR, and BIASCORR). In addition, each frame is examined and residual images are excluded by a mask. If a dark correction is requested, the dark reference file (which is normalized to 1 second) is scaled by the DARKTIME keyword value and subtracted from the observation. The keyword DARKCORR controls the subtraction of the dark file (.r3h). By default, DARKCORR is set to "PERFORM" for all exposures longer than 10 seconds, and to "OMIT" for shorter exposures.

Flatfield Multiplication

The number of electrons generated in a given pixel by a star of a given magnitude depends on the individual quantum efficiency of the pixel as well as on any large scale vignetting of the field-of-view caused by the telescope and camera optics. To correct these overall variations in total quantum efficiency, the image is multiplied by a flatfield file, which is currently generated from a combination of on-orbit data used to determine the large-scale structure of the illumination pattern and data taken before launch to determine the pixel-to-pixel response function.
The application of the flatfield file (extension .r4h) is controlled by the keyword FLATCORR.

Shutter Shading Correction

The finite velocity of the shutter produces a position-dependent exposure time. This effect is only significant for exposures of a few seconds or less, and is automatically removed from all exposures of less than 10 seconds.

Creation of Photometry Keywords

Photometry keywords, which provide the conversion from calibrated counts to astronomical magnitude, are created using the STSDAS package synphot. (More information on synphot can be found in this document, and in the Synphot Users Guide, which is available through the world wide web.) These keywords are listed in Figure 38.3, below; the first two keywords are in the ASCII header (both .d0h and .c0h), while the last five keywords are group parameters (use the IRAF tasks imheader or hedit to examine the group keywords; see Chapter 2 for more details).

Figure 38.3: Photometry Keywords

Histogram Creation

Histograms of the raw data, the A/D corrected data, and the final calibrated output data are created and stored in the .c2h/.c2d image. This is a multigroup image with one group for each group in the calibrated data file. Each group contains a 3-line image where row 1 is a histogram of the raw data values, row 2 is a histogram of the A/D corrected data, and row 3 is a histogram of the final calibrated science data.

Data Quality File Creation

The calwp2 software combines the raw data quality files (.q0h, .q1h) with the reference file data quality files (.b2h, .b3h, etc.) in order to generate the calibrated science data quality file (.c1h). The flag values used are defined below. The final calibrated data quality file (.c1h) may be examined (for example, using SAOimage, ximtool, or imexamine) to identify which pixels may be bad in your science image. The bad pixels flagged in the .c1h file have not been fixed up in the .c0h file.
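Since the data quality flag values are powers of two, a pixel affected by several conditions carries their sum, and the individual conditions can be recovered with bitwise tests. A sketch in Python (flag names abbreviated from Table 38.3; decode_dq is an illustrative helper, not an STSDAS task):

```python
# WFPC2 data quality flag bits, abbreviated from Table 38.3.
# The 1024 value is the repairable-hot-pixel flag paired with 512.
DQ_FLAGS = {
    1: "Reed-Solomon decoding error",
    2: "Calibration file defect",
    4: "Permanent camera defect",
    8: "A/D converter saturation",
    16: "Missing data",
    32: "Bad pixel (other)",
    128: "Permanent charge trap",
    256: "Questionable pixel",
    512: "Unrepairable hot pixel",
    1024: "Repairable hot pixel",
}

def decode_dq(value):
    """List the conditions set in a data quality pixel value (0 = good)."""
    return [name for bit, name in DQ_FLAGS.items() if value & bit]
```

For example, a data quality value of 12 decodes to a permanent camera defect combined with A/D converter saturation.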
You may wish to use the STSDAS task wfixup to interpolate across bad pixels in your science image.

Table 38.3: WFPC2 Data Quality Flag Values

Flag Value   Description
------------------------------------------------------------------------------
0            Good pixel
1            Reed-Solomon decoding error. This pixel is part of a packet of
             data in which one or more pixels may have been corrupted during
             transmission.
2            Calibration file defect; set if pixel flagged in any
             calibration file.
4            Permanent camera defect. Static defects are maintained in the
             CDBS database and flag problems such as blocked columns and
             dead pixels.
8            A/D converter saturation. The actual signal is unrecoverable
             but known to exceed the A/D full-scale signal (4095).^a
16           Missing data. The pixel was lost during readout or
             transmission.
32           Bad pixel that does not fall into above categories.
128          Permanent charge trap.
256          Questionable pixel. A pixel lying above a charge trap which may
             be affected by the trap.
512          Unrepairable hot pixel.
1024         Repairable hot pixel.
------------------------------------------------------------------------------
a. Calibrated saturated pixels may have values significantly lower than 4095 due to bias subtraction.

The Image

After the completion of the standard pipeline processing, the final image is placed in a real-valued FITS format file labelled with the extension .c0f.

------------------------------------------------------------------------------

CHAPTER 39: WFPC2 Error Sources

In This Chapter...

Bias Subtraction Error
Flatfield Errors
Dark Current Subtraction Errors

In this chapter we discuss some of the errors associated with data calibration, with the intent of helping users decide whether they can improve their data by recalibration. A number of subtle errors and problems which appear in WFPC2 data, but which are not directly related to data calibration, are discussed later.
The pipeline calibrated your data with the most up-to-date, WFPC2-specific calibration reference files available at the time the data were processed. However, updated reference files (in particular dark files) can become available after your data were processed. Certainly if you notice unusual features in your data, or if your analysis requires a high level of accuracy, you should determine whether a better set of calibration reference files exists with which to recalibrate your data.

However, finding that a calibration reference file has changed since your data were calibrated does not always mean that you have to recalibrate. The decision depends very much on which calibration image or table changed, and whether that kind of correction is likely to affect your analysis. Before deciding to recalibrate, you may want to retrieve the recommended and used calibration files and compare them to see if the differences are important. (You can use the table tools in the STSDAS ttools package to manipulate calibration tables; the images can be manipulated in the same ways as the science data.) In the remainder of this chapter we discuss specific calibration errors that may affect your data.

Bias Subtraction Error

It is now realized that proper bias subtraction requires the use of separate bias values for odd and even columns. WFPC2 data taken before May 4, 1994 did not use this form of bias subtraction. As a result, a striated pattern with a typical amplitude of a few electrons (a fraction of a DN) may remain in these images. These data will benefit from recalibration unless the signal of the observation is so large that the noise statistics are entirely dominated by shot noise. The bias level for very early observations was determined using a part of the overscan region that is affected by the overall signal in the CCD, resulting in oversubtraction of the bias level. This can produce incorrect sky levels or even negative pixel values.
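The "dominated by shot noise" criterion above is easy to quantify: per-pixel shot noise grows as the square root of the collected signal. A back-of-the-envelope sketch in Python (the 3 e- striping amplitude is a representative value from the text; the straight comparison against the full shot noise is an illustrative choice, not an official rule):

```python
import math

def striping_significant(mean_signal_e, stripe_amplitude_e=3.0):
    """Rough check: does residual bias striping matter versus shot noise?

    Pre-May-1994 calibrations can leave striping of a few electrons
    (a fraction of a DN); recalibration gains little once shot noise
    sqrt(N) dwarfs that amplitude.
    """
    shot_noise = math.sqrt(mean_signal_e)
    return stripe_amplitude_e >= shot_noise
```

For a faint background of a few electrons per pixel the striping is comparable to the shot noise and recalibration is worthwhile; at ten thousand electrons per pixel it is negligible.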
Flatfield Errors

Early WFPC2 data were flattened using flatfields that did not take into account the large-scale structure of the flatfield. The offending flats have names that begin with "d" or "e1". However, if your data were taken late enough that the bias is correct, the flats used will not have suffered from this problem. Some special-purpose filters (such as the ramp filters) did not have flats made until well after the installation of WFPC2, and others still, at the time of this writing (December 1995), do not have an accurate flatfield. In these rare cases, your data will show a flatfield name in the header, but FLATCORR will not have been performed. The flatfield mentioned in these headers is a dummy flat consisting of the value 1. If flatfields have been made available since the pipeline processing of your data, you will certainly want to reprocess.

New flatfields are being prepared for the WFPC2 in late 1995 and early 1996. While these will be more accurate than presently available flats, we expect that few users will need to recalibrate as a result of this change. In the optical, the new flats differ from the old by less than 1 percent over the vast majority of the chip, with the differences growing in the outer 50 pixels of the chip to about 5 percent at all wavelengths. Longward of 850 nm, differences of up to 1.5 percent are seen across the main body of the chips, and shortward of 300 nm the differences between the old and new flatfields are not yet known, but are estimated to be less than 3 percent over most of the chip.

All WFPC2 flatfields are created to flatten an image of constant surface brightness. However, due to geometric distortion of the image by the optics, the area subtended by a WFPC2 pixel on the sky depends on its location on the chip. The total variation across the chip is a few percent. Therefore, the photometry of point sources is slightly corrupted by the standard flattening procedure.
This effect, and its correction, are discussed further in the later section on WFPC2 Photometry. Dark Current Subtraction Errors Electronic Dark Current At the operating temperature of -88 C, maintained after April 23, 1994, the WFPC2 CCDs have a low dark background, ranging between 0.002 and 0.01 e-/s/pixel. A relatively small number of pixels have dark currents many times this value. These hot pixels are discussed in greater detail in the next section. To remove the dark current, the standard pipeline procedure takes a dark reference file (which contains the average dark background in DN/s), multiplies it by the dark time (determined by the header keyword DARKTIME), and subtracts this from the bias-subtracted image. Dark correction for data taken before April 23, 1994 uses the same procedure; however, the average dark current was about an order of magnitude larger before the 1994 cool down, and the dark correction is therefore both more important and generally less accurate. The dark time is usually close to the exposure time, but it can exceed the latter substantially if the exposure was interrupted and the shutter closed, for example as a consequence of loss of lock. Such instances are rare and should be identified in the data quality comments for each observation, but will also be indicated by a difference between the exposure start and end times that is greater than the total exposure time. The true dark time differs from pixel to pixel because of the different time elapsed between reset and readout, but this small differential is present both in the bias image and in the observation itself, and therefore is automatically corrected for by the bias subtraction. New dark reference files are delivered on a weekly basis, but because of the necessary processing, they are usually available a week or two later than the observation itself. As a result, the dark used in the pipeline is not the same as the dark reference file recommended by StarView. 
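In outline, the dark step described above is a simple scale-and-subtract operation. A minimal pure-Python sketch, with illustrative values (the pipeline works on full GEIS image arrays; the function name is ours, not a pipeline task):

```python
def subtract_dark(bias_subtracted, dark_ref, darktime):
    """Scale the dark reference file (average dark background in DN/s)
    by the DARKTIME header value and subtract it from the
    bias-subtracted image, as the standard pipeline does."""
    return [
        [pix - dark * darktime for pix, dark in zip(img_row, dark_row)]
        for img_row, dark_row in zip(bias_subtracted, dark_ref)
    ]

# Illustrative: a 100 DN frame, a uniform dark of 0.002 DN/s,
# and an 1800 s DARKTIME.
img = [[100.0] * 4 for _ in range(4)]
dark = [[0.002] * 4 for _ in range(4)]
out = subtract_dark(img, dark, 1800.0)   # 100 - 3.6 = 96.4 DN per pixel
```

Note that DARKTIME, not EXPTIME, sets the scale factor; as the text explains, the two can differ if the exposure was interrupted.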
The primary difference between successive darks is in the location and value of hot pixels. This difference will be most notable if a decontamination occurred between the images used to create the dark and the observation itself. However, because direct attack on the hot pixels themselves, rather than dark file subtraction, now appears the best way to remove hot pixels, many users will find that they do not need to reprocess with the most up-to-date dark file. Standard darks are based on a relatively small number (10) of exposures; this small number is used in order to track the variable hot pixels. However, the noise in these darks can be a significant component of the total noise in deep images. Observers whose images are formed from exposures totalling more than several orbits may therefore wish to recalibrate their data using the so-called superdarks, which have been generated by combining 100 individual exposures (see the reference file memo on the WFPC2 web page). The standard darks may be modified in early 1996 to include superdarks. Dark Glow While the electronic dark current is relatively stable between observations, a component of the "dark current" has been seen to vary between observations. The intensity of this dark glow is correlated with the observed cosmic ray rate and is believed to be due to luminescence in the MgF2 CCD windows under cosmic ray bombardment. As a result of the geometry of the windows, the dark glow is not constant across the chip, but rather shows a characteristic edge drop of about 50 percent. The dark glow is significantly stronger in the PC, where it dominates the total dark background, and weakest in WF2. The average total signal at the center of each camera is 0.006 e-/s in the PC, 0.004 e-/s in WF3 and WF4, and 0.0025 e-/s in WF2; of this, the true dark current is approximately 0.0015 e-/s, and the rest is dark glow. For more details, see the WFPC2 Instrument Handbook, Version 3.0, pages 47-49. 
Because of the variability in the dark glow contribution, the standard dark correction may leave a slight curvature in the background. For the vast majority of observations, this is not a significant problem, because of the very low level of the error (worst-case center-to-edge difference of 2 e-/pixel) and its slow variation across the chips. However, if an observation requires careful determination of the absolute background level, observers are encouraged to contact the WFPC2 instrument scientists directly, via their contact scientist or through the Help Desk. Hot Pixels Figure 39.1 shows a section of a PC image of a stellar field. Cosmic rays have been removed through comparison of successive images. Nonetheless, individual bright pixels are clearly visible throughout the field. About 0.3 percent of all pixels have a dark current that exceeds 0.02 e-/s, or 10 times the typical true electronic dark current. These pixels are termed hot. New hot pixels are generated continuously at a rate of about 33 pixels per CCD per day, presumably as a consequence of particle damage to the CCDs. Much of this damage is repaired at decontaminations, when the camera is warmed to about 20 degrees C for 6 hours. The total number of hot pixels does not seem to grow secularly with time. Because of the time variability of hot pixels, standard dark correction does not deal with them adequately. Even dark frames taken within a week of the observation will contain some hot pixels that vary significantly from those in the observation. For this reason, we have developed a task, known as warmpix, that will allow a user to either flag, or attempt to correct, pixels which are known to be hot at the time of the observation. Details of this procedure are found in the next chapter. Figure 39.1: PC Image of Stellar Field Showing Hot Pixels ------------------------------------------------------------------------------ CHAPTER 40: Recalibrating WFPC2 Data In This Chapter... 
The Standard Pipeline Calibration Beyond the Pipeline Calibrating Polarization Data In this chapter we discuss the recalibration of WFPC2 data. We present both the mechanics of recalibrating data using the standard pipeline and variations on the standard procedure that are useful for certain types of data. The Standard Pipeline Assembling the Calibration Files In order to recalibrate your data, you need to retrieve all of the reference files and tables that are used by the calibration steps you want to perform. You will find a description of how to obtain the appropriate reference files from the STScI Archive using StarView in the "Tutorial: Retrieving Calibration Reference Files" on page 103. Standard pipeline processing uses those files listed by StarView as the best reference files. We suggest copying the raw data files and the required reference files and tables to a subdirectory that you will use for recalibration. This will preserve your original files. Setting Calibration Switches The next step in recalibrating HST data is to set the calibration switches and reference keywords in the header of your raw data file (.d0h). These switches determine which calibration steps are performed and which reference files are used at each step in the process. To change the calibration header keywords in a dataset, we recommend you first use the chcalpar task in the STSDAS hst_calib.ctools package. Once you are experienced with WFPC2 recalibration, you may, like some users, prefer to use the hedit task. The chcalpar task takes a single input parameter: the name(s) of the image files to be edited. When you start the chcalpar task, the task will automatically determine the instrument used to produce that image and will open one of several parameter sets (psets) loaded with the current values of your header keywords. The WFPC2 pset is ckwwfp2. A detailed description of the steps involved in changing header keywords follows: 1. 
Start the chcalpar task, specifying the image(s) in which you want to change keyword values. If you specify more than one image, for example, using wildcards, the task will take initial keyword values from the first image. For example, you could change keywords for all WFPC2 raw science images in the current directory (with initial values from the first image), using the following command: wf> chcalpar u*.d0h 2. When you start chcalpar, you will be placed in epar, the IRAF parameter editor, where you can edit the pset of calibration keywords. Change the values of any calibration switches, reference files, or tables to the values you wish to use for recalibrating your data. Remember that no processing has been done on the raw datasets. Therefore, even if you only wish to correct, for instance, the flatfielding, you will need to redo the bias and dark current subtraction as well, so the switches for all of these steps will need to be set to PERFORM. 3. Exit the editor when you are done by typing :q two times (the first :q takes you out of the pset editor; the second out of the task). The task will ask if you wish to accept the current settings. If you type "y", the settings are saved and you will return to the IRAF CL prompt. If you type "n", you will be placed back in the editor to redefine the settings. If you type "a", you will return to the IRAF CL prompt and any changes will be discarded. For additional examples of updating the calibration keywords, check the on-line help by typing help chcalpar. The calibration reference file names in the header of the raw data (i.e., the .d0h file) are typically preceded by five characters (e.g., uref$ for calibration images and utab$ for calibration tables) which are pointers to the location on disk where the files are to be found by the calibration software. 
Before running the calibration routines, you will need to set these variables to the path where your reference files (and .x0h/.q1h raw data files) are located. For WFPC2 data, you would use something like the following: to> set uref = "/nemesis/hstdata/caldir" to> set utab = "/nemesis/hstdata/caldir" to> set mtab = "/nemesis/hstdata/caldir" to> set ucal = "/nemesis/hstdata/caldir" In VMS, to set utab, for instance, one would instead type: to> set utab = "DISK$SHARE:[HSTDATA.CALDIR]" where HSTDATA.CALDIR is the directory where you have stored the calibration reference files and tables. Once you have correctly changed the values of the calibration keywords in the header of the raw data file, you are ready to recalibrate your data. The WFPC2 calibration software, calwp2, is run by typing the name of the task followed by the rootname of the observation dataset. For example, to recalibrate the dataset u0w10e02t and write the log of the results to the file calwfp2.log (rather than to the screen), you would type: hr> calwp2 u0w10e02t > calwfp2.log Note that the calibration routine will not overwrite an existing calibrated file. If you run the calibration tasks in the directory where your calibrated data already exist, you will need to specify a different output file name, for example: calwp2 u00ug201t wfpc_out > wfpc.log For more information about how these routines work, use the on-line help: type help calwp2. Calculating Absolute Sensitivity for WFPC2 If you set DOPHOTOM=OMIT before running calwp2, then the values of inverse sensitivity (PHOTFLAM), pivot wavelength (PHOTPLAM), RMS bandwidth (PHOTBW), zero point (PHOTZPT), and observation mode (PHOTMODE) will not be written to the header of the recalibrated data file. Remember that the DOPHOTOM calibration step does not alter the values of the data (which are always counts or data numbers in the calibrated file), but only writes the information necessary to convert counts to flux in the header of the file. 
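Because DOPHOTOM only records the conversion keywords rather than changing the data, applying them is left to the user. A sketch of the counts-to-flux conversion implied by PHOTFLAM (the numbers below are purely illustrative):

```python
def counts_to_flux(dn, exptime, photflam):
    """Convert total counts (DN) to mean flux density in
    erg s^-1 cm^-2 A^-1 using the PHOTFLAM header keyword,
    which is defined as the flux producing 1 DN per second."""
    return (dn / exptime) * photflam

# Illustrative: 5000 DN in a 100 s exposure, with an assumed
# PHOTFLAM of 3.459e-18 (the Table 41.1 value for F555W).
flux = counts_to_flux(5000.0, 100.0, 3.459e-18)   # 50 DN/s -> 1.7295e-16
```

The same count rate divided into PHOTFLAM-scaled flux is what the synphot-based zeropoints in the next chapter repackage as magnitudes.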
Therefore, unless you wish to recalculate the absolute sensitivity for your observation (e.g., because a more recent throughput table exists for your observing mode), there is no need to recompute these values, and you can simply take the keyword values from your original calibrated file and apply them to your recalibrated data. However, new estimates of WFPC2 transmission and absolute sensitivity were completed in the summer of 1995. If your data were processed in the pipeline before September 1995, you may wish to re-create the absolute sensitivity parameters using the latest version of synphot, which contains tables based on the most recent photometric calibration of WFPC2. If you wish to recalculate the absolute sensitivity, set DOPHOTOM=YES in the .d0h file before running calwp2, or alternatively, use the tasks in the synphot package of STSDAS. "Using Synphot" on page 44 has more information about how to use synphot. To calculate the absolute sensitivity, calwp2 and the synphot tasks use a series of component lookup and throughput tables. These tables are not part of STSDAS itself, but are part of the synphot dataset, which can be easily installed at your home site (see "Synphot Dataset" on page 71 for information about how to do this). A more detailed discussion of photometric calibration can be found in "Photometric Corrections" on page 511. You must have retrieved the synphot tables in order to recalculate absolute sensitivity for WFPC2 data using calwp2 or synphot. Calibration Beyond the Pipeline Superdarks and Hot Pixel Removal At present, pipeline darks are created from approximately 10 dark images, each of which has an integration time of about 1800 seconds. The darks are usually obtained in sets of five, which are taken about once every ten days. 
For programs in which the total exposure time on any given field is less than a few orbits (or equivalently about 8,000 seconds), the combined integration time of 20,000 seconds in the darks is sufficiently long that the pipeline darks will not add a significant amount of shot noise to the final image. However, in images with longer integration times, the user may wish to use superdarks. These are dark reference frames created from of order 100 dark frames. Individual hot pixels are removed from the superdarks to leave behind the large-scale structure of the dark current. Hot pixels can then be removed using the newly developed warmpix task. warmpix uses tables of hot pixels that can be retrieved from STScI via the WFPC2 pages on the WWW. The user can either flag hot pixels or attempt to subtract them. Users who do not calibrate with the superdarks (and therefore may not need to rerun the calibration pipeline) may nonetheless wish to use the warmpix task to remove hot pixels not taken out by normal dark subtraction. We expect the warmpix task to be distributed in early 1996. If your implementation of STSDAS does not include this task, please contact the STScI Help Desk at help@stsci.edu for more information on how to obtain it. Use of the superdarks requires the observer to also use superbias files. These are not appreciably lower in noise than the biases used in the pipeline, but are the bias files that were used in the creation of the superdark reference files. A user wishing to recalibrate using superdarks should therefore: 1. Obtain all of the calibration reference files recommended by the StarView reference screen, except for the bias and dark files. 2. Determine the appropriate superbias, superdark, and warm pixel tables using the WFPC2 WWW documentation page. 3. Obtain the superbias and superdark files from the Archive by using the data file name. Obtain the warm pixel file from the WFPC2 web page explaining reference files. 4. 
Recalibrate the data using the standard technique described earlier in this chapter, but use the superbias and superdark in place of the recommended bias and dark files. 5. Run the warmpix task on the data. A user who wishes to remove hot pixels from data that have undergone the standard pipeline processing can also run warmpix. If you wish to flag only hot pixels, you need only provide the task with the calibrated image, the data quality (.dqf) file, and the warm pixel list. If, however, you wish to subtract the best estimate of the dark current in the hot pixels, the warmpix task will also require the dark and flatfield images used to calibrate the data. A very small percentage of hot pixels in any observation will not be found in the hot pixel lists. You can remove a large fraction of these few remaining pixels by breaking the GEIS file up into separate chips and processing them with the cosmicrays task, which can separate most hot pixels from stellar PSFs even in the WF chips. However, if hot pixels are likely to be a concern in your observations, you may wish to dither your observations; this should be considered at the planning stage. Calibrating Polarization Data Work to calibrate the WFPC2 polarizers is still underway. In this section, we provide a preliminary sketch of how polarizer data are expected to be calibrated. Polarization measurements require that observations be taken with the polarizers set to three or more angles relative to the target. This can be achieved in a number of ways: by using different quads of the polarizer filter, rotating the polarizer filter, rotating the spacecraft, or combinations of these methods. (A discussion of polarization observation strategies can be found in WFPC2 Instrument Science Report 95-01, "WFPC2 Polarization Observations: Strategies, Apertures, and Calibration Plans" by Biretta and Sparks.) Initial calibration proceeds in the same way as with any other images. 
The data are bias, dark, and flatfield corrected in the normal manner. The flatfield reference files (provided by STScI) will be scaled such that an unpolarized target gives the same total counts regardless of which polarizer quad or filter rotation is used, hence observers will not need to make separate corrections for quad or rotation. Next, the polarization properties of the source are derived by effectively plotting measured counts vs. polarizer position angle on the sky. This can be done either using the total counts for the target, or by aligning the images and plotting the counts on a pixel-by-pixel basis (e.g., for an extended source). The polarizer position angles are derived from the PA_V3 parameter in the image headers. We anticipate that small corrections will be needed for the polarizer position angles (derived from on-orbit calibrations on polarized targets), and that these will depend on which filter quad and rotation were used. Finally, sine curves are fit to these plots. The count rates at the maxima and minima of the fitted curves are noted, and are then used with the synphot filters polq_par and polq_per to derive the corresponding fluxes, which are in turn used to derive the total flux and fractional polarization. (It is possible that corrections will be needed for the poor blocking of the perpendicular polarizer when converting from counts to fluxes; this remains to be determined.) The polarization angle of the target will then be derived from the phase of the fitted sine wave. Calibrations should be completed early in 1996, and a report will then be available giving details of the analysis methods. The availability of this report will be announced both in the WFPC2 Electronic Newsletter and on the WFPC2 web pages. ------------------------------------------------------------------------------ CHAPTER 41: Specific WFPC2 Calibration Issues In This Chapter... 
The Zeropoint Photometric Corrections Miscellaneous Photometric Corrections Further WFPC2 Reduction Issues Dithering WFPC2 Image Anomalies This section is a practical guide to photometry with WFPC2. We discuss how to accurately determine the zeropoint, photometric corrections that should be made to WFPC2 data, and common problems and their solutions. We start with the most important aspects of the photometric calibration that affect all observers, largely independently of the final accuracy desired, and in later sections consider subtle effects that can produce relatively small errors. A relatively simple calibration will produce photometric errors of 5 to 10 percent. With attention to more subtle effects, photometric accuracy between 2 and 5 percent may be achieved. The Zeropoint The zeropoint of an instrument, by definition, is the magnitude of an object that produces one count (or data number, DN) per second. The magnitude of an arbitrary object producing DN counts in an observation of length EXPTIME is therefore m = -2.5 x log10(DN / EXPTIME) + ZEROPOINT It is the setting of the zeropoint, then, which determines the connection between counts and a standard photometric system (such as Cousins RI), and in turn between counts and astrophysically interesting measurements such as the flux incident on the telescope. There are several ways to determine the zeropoint: 1. Do it yourself. Over the past year, a substantial amount of effort has gone into obtaining accurate zeropoints for all of the filters used on HST. Nonetheless, if you have good ground-based photometry of objects in your HST field, you may wish to use your own photometry to determine the zeropoint of your observations. This approach may be particularly useful when you are trying to convert HST observations to a non-standard photometric system. 2. Use a summary list: Holtzman et al. (1995b) have published an excellent summary of WFPC2 photometry. 
This includes zeropoints based on observations of Omega Cen for the five main broad-band colors (i.e., F336W, F439W, F555W, F675W, F814W), as well as synthetic photometry for most other filters. Transformations from the WFPC2 filter set to UBVRI are included. The paper also includes a cookbook section describing in detail how to do photometry with WFPC2. This paper is available from STScI or by sending e-mail to help@stsci.edu. 3. Use the PHOTFLAM keyword in the header of your data: The simplest way to determine the zeropoint of your data is to use the PHOTFLAM keyword in the header of your image. PHOTFLAM is the flux of a source with constant flux per unit wavelength (in erg s^-1 cm^-2 A^-1) which produces a count rate of 1 DN per second. This keyword is generated by the synthetic photometry package synphot, which you may also find useful for a wide range of photometric and spectroscopic analyses. The procedure for converting PHOTFLAM to a standard zeropoint is described below. The synphot package was updated in August 1995 to match the numbers in Holtzman's list and the WFPC2 Instrument Handbook (Version 3.0, June 1995). synphot now provides accuracies of a few percent in nearly all cases, with a scatter of only 2 percent. Prior to this update, the scatter was about 8 percent for most filters, with a few UV filters being considerably worse. Changes to values of PHOTFLAM resulting from this update range from +44 percent (F160BW) to -18 percent (F170W), with more typical values between +15 and -5 percent in the visible and infrared. Table 41.1 lists the new values for PHOTFLAM. If your data were processed before September 1995, your header may contain the old value. You can obtain the new value of PHOTFLAM from the table, or, if your system administrator has obtained the newest synphot tables, you can use the bandpar task in synphot to easily obtain PHOTFLAM for any of the WFPC2 filters and chips. The STSDAS tables are available from STEIS. 
If you're using FTP, they are in the directory /cdbs/comp/wfpc2. The README file in this directory contains information on which files to retrieve. The ST magnitude corresponding to a given PHOTFLAM is, by definition: ZP_STMAG = -2.5 x log10(PHOTFLAM) - 21.1 This zeropoint differs substantially from the values in Holtzman et al. (1995b) because it is based on the ST magnitude system (see below), while the Holtzman et al. values attempt to approximate magnitudes in the Johnson-Cousins system. An additional difference of roughly 0.85 mag from the Holtzman et al. values arises for two reasons: 1) those authors assume a gain of 14 has been used for the observations rather than a gain of 7, and 2) they use an aperture of 0.5", which results in a further correction of roughly 0.10 mag. The ST magnitude system is unique in being based upon a spectrum which is flat in F_lambda. The Johnson-Cousins UBVRI system is based upon the spectrum of Vega, and AB magnitudes (such as the Gunn system) assume a spectrum flat in F_nu. The correction to either of these systems is relatively simple using synphot, and the actual correction factors are small when converting to Johnson-Cousins. For instance, to determine the difference in zeropoint between the F814W filter and the Cousins I band for a K0III star on WF3 using the gain=7 setting, you can type:

sy> calcphot "band(wfpc2,3,a2d7,f814W)" crgridbz77$bz_54 stmag

where the Bruzual stellar atlas is being used (file = crgridbz77$bz_54), and obtain the output:

Mode = band(wfpc2,3,a2d7,f814W)
   Pivot       Equiv Gaussian
 Wavelength        FWHM
  7982.044      1507.155    band(wfpc2,3,a2d7,f814W)
Spectrum: crgridbz77$bz_54
   VZERO      STMAG    Mode: band(wfpc2,3,a2d7,f814W)
      0.    -15.1045

and compare this with:

sy> calcphot "band(cousins,I)" crgridbz77$bz_54 vegamag
Mode = band(cousins,I)
   Pivot       Equiv Gaussian
 Wavelength        FWHM
  7891.153       898.879    band(cousins,I)
Spectrum: crgridbz77$bz_54
   VZERO    VEGAMAG    Mode: band(cousins,I)
      0.    -16.3327

which shows that for a star of this color, the correction is 1.2 magnitudes (note that nearly all of this offset is due to the definition of ST magnitudes; the F814W filter is a very close approximation to Johnson-Cousins I, and color terms between these filters are very small). More details on the use of synphot can be found in the Synphot Users Guide.

Table 41.1: New Values of PHOTFLAM and Zeropoints

Filter   PHOTFLAM (Old)  PHOTFLAM (New)  ZP (STMAG)  ZP (Vega)
------------------------------------------------------------------------------
F160BW   3.990 E-15      5.747 E-15      14.501      14.737
F170W    1.789 E-15      1.471 E-15      15.981      16.287
F218W    9.060 E-16      9.997 E-16      16.400      16.506
F255W    5.210 E-16      5.308 E-16      17.088      16.985
F336W    5.737 E-17      5.675 E-17      19.515      19.399
F380W    2.387 E-17      2.517 E-17      20.400      20.962
F390N    6.419 E-16      6.480 E-16      16.871      17.552
F410M    9.122 E-17      1.022 E-16      18.876      19.650
F437N    6.945 E-16      7.313 E-16      16.740      17.308
F439W    2.558 E-17      2.964 E-17      20.220      20.887
F450W    8.136 E-18      8.863 E-18      21.531      22.018
F467M    4.780 E-17      5.729 E-17      19.505      20.002
F469N    4.298 E-16      5.277 E-16      17.094      17.571
F487N    3.401 E-16      3.936 E-16      17.412      17.392
F502N    3.053 E-16      3.001 E-16      17.707      17.990
F547M    7.410 E-18      7.649 E-18      21.691      21.689
F555W    3.128 E-18      3.459 E-18      22.553      22.573
F569W    3.844 E-18      4.131 E-18      22.360      22.268
F588N    5.396 E-17      6.090 E-17      19.438      19.203
F606W    1.692 E-18      1.862 E-18      23.225      22.933
F622W    2.511 E-18      2.778 E-18      22.791      22.392
F631N    8.886 E-17      9.210 E-17      18.989      18.531
F656N    1.035 E-16      1.392 E-16      18.541      17.765
F658N    8.481 E-17      1.032 E-16      18.865      18.103
F673N    5.994 E-17      5.995 E-17      19.456      18.781
F675W    2.597 E-18      2.878 E-18      22.752      22.077
F702W    1.690 E-18      1.852 E-18      23.231      22.469
F785LP   4.792 E-18      4.740 E-18      22.211      20.739
F791W    2.814 E-18      2.905 E-18      22.742      21.554
F814W    2.399 E-18      2.480 E-18      22.914      21.688
F850LP   8.786 E-18      8.325 E-18      21.599      20.002
F953N    2.991 E-16      2.551 E-16      17.883      16.076
F1042M   1.973 E-16      1.936 E-16      18.183      16.309
------------------------------------------------------------------------------
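The relations in this section chain together directly: PHOTFLAM gives an ST zeropoint, and the zeropoint converts a count rate to a magnitude. A sketch, using the F555W value from Table 41.1 (function names are ours, for illustration):

```python
import math

def st_zeropoint(photflam):
    """ZP_STMAG = -2.5 * log10(PHOTFLAM) - 21.1, as defined above."""
    return -2.5 * math.log10(photflam) - 21.1

def st_magnitude(dn, exptime, photflam):
    """m = -2.5 * log10(DN / EXPTIME) + zeropoint."""
    return -2.5 * math.log10(dn / exptime) + st_zeropoint(photflam)

zp = st_zeropoint(3.459e-18)      # ~22.55, matching the F555W ZP (STMAG) entry
m = st_magnitude(1000.0, 1000.0, 3.459e-18)   # 1 DN/s recovers the zeropoint
```

Remember that the tabulated zeropoints assume the gain=7 setup; for gain=14 data, or for comparison with Holtzman et al. zeropoints, apply the gain and aperture offsets discussed above.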
Photometric Corrections A number of corrections must be made to WFPC2 data to obtain the best possible photometry. Some of these, such as the corrections for UV throughput variability, are time dependent, and others, such as the correction for the geometric distortion of the WFPC2 optics, are position dependent. Contamination (Time Dependent) Contaminants adhere to the cold CCD windows of the WFPC2. Although these typically have little effect upon the visible and near-infrared performance of the cameras, the effect upon the UV is quite dramatic, and can reach values of about 30 percent after 30 days for the F160BW filter. These contaminants are largely removed during periodic warmings of the camera, and fortunately, in between these decontaminations, the effect upon photometry is both linear and stable, and can be removed using values regularly measured in the WFPC2 calibration program. Table 41.2 shows the contamination rates measured for PC1 and WF3, and Table 41.3 provides decontamination dates up until June 1995. Updated lists are kept on the WFPC2 pages. Note that only PC1 and WF3 have been carefully monitored during Cycle 4, except in F170W, where all chips have been monitored weekly. All four chips will be monitored in Cycle 5. For the present, we recommend using the values for WF3 for the other WF chips. In addition, the contamination rate has only been monitored using a single star (a white dwarf) at a single position on the chip. We are currently examining observations of Omega Cen to determine how position on the chip, spectral type, and aperture size affect the results. The contamination rates reported in Table 41.2 use the standard 0.5" aperture, hence they are slightly different from the rates listed in the WFPC2 Instrument Handbook, Version 3.0. The synphot package can be used to determine the effect of contamination on your observations. 
For example, to compute the expected countrate for a WF3, F218W observation taken 20 days (MJD=49835.0) after the April 8, 1995 decontamination, with the gain=15 (a2d15) setup, one can use, for instance: calcphot "wfpc2,3,f218w,a2d15,cont#49835.0" spec="bb(8000)" form=counts Removing the cont#49835.0 from the command will determine the countrate if no contamination were present. An 8000 K black body spectrum was chosen largely as a matter of simplicity--the correction values for contamination depend only on the filter chosen and do not reflect the source spectrum.

Table 41.2: Contamination Rates (Fractional Loss per Day)

Filter   PC1 Rate   PC1 Error    WF3 Rate   WF3 Error
------------------------------------------------------------------------------
F160BW   0.00902    +- 0.00112   0.01293    +- 0.00192
F170W    0.00540    +- 0.00047   0.01021    +- 0.00036
F218W    0.00468    +- 0.00032   0.00866    +- 0.00040
F255W    0.00233    +- 0.00028   0.00472    +- 0.00034
F336W    0.00031    +- 0.00030   0.00202    +- 0.00045
F439W    0.0000     --           0.00084    +- 0.00039
F555W    0.0000     --           0.00054    +- 0.00030
F675W    0.0000     --           0.0000     --
F814W    0.0000     --           0.0000     --
------------------------------------------------------------------------------

Table 41.3: Decontamination Dates

Year.Day:Hour:Min   Month-Day-Year   Modified Julian Date
------------------------------------------------------------------------------
1994.114:01:22      Apr-24-1994      49466.06
1994.144:00:08      May-24-1994      49496.01
1994.164:17:35      Jun-13-1994      49516.73
1994.191:18:13      Jul-10-1994      49543.76
1994.209:13:45      Jul-28-1994      49561.57
1994.239:16:19      Aug-27-1994      49591.68
1994.268:07:19      Sep-25-1994      49620.30
1994.294:07:14      Oct-21-1994      49646.30
1994.323:18:02      Nov-19-1994      49675.75
1994.352:06:33      Dec-18-1994      49704.27
1995.013:16:47      Jan-13-1995      49730.70
1995.043:02:27      Feb-12-1995      49760.10
1995.070:15:03      Mar-11-1995      49787.63
1995.098:11:02      Apr-08-1995      49815.46
1995.127:01:46      May-07-1995      49844.07
1995.153:19:03      Jun-02-1995      49870.79
1995.178:20:33      Jun-27-1995      49895.86
------------------------------------------------------------------------------
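Because the contamination decline is linear between decontaminations, the Table 41.2 rates can also be applied directly to a measured count rate. A sketch under that assumption (the function name is illustrative; for careful work use synphot with the cont# keyword as shown above):

```python
def correct_contamination(measured_rate, loss_per_day, days_since_decon):
    """Undo the linear throughput loss: after t days, the measured
    count rate is the true rate times (1 - loss_per_day * t), with
    loss_per_day taken from Table 41.2."""
    return measured_rate / (1.0 - loss_per_day * days_since_decon)

# Illustrative: WF3, F218W, 20 days after a decontamination
# (rate 0.00866 per day from Table 41.2); the throughput has dropped
# by about 17 percent, so the measured rate is corrected upward.
true_rate = correct_contamination(1000.0, 0.00866, 20.0)
```

The same rates, with their quoted errors, also bound the uncertainty of the correction: for the UV filters, a few days' error in the decontamination epoch translates to a percent-level photometric error.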
April 23, 1994 Cool Down (Time Dependent) The temperature of the WFPC2 was lowered from -76 C to -88 C on April 23, 1994, in order to minimize the CTE problem. Besides increasing the contamination rates (see above), this also improved the photometric throughput, especially in the UV. Table 41.4 provides a partial list of corrections to Table 41.1 for data taken before the cool down. Including the MJD in a synphot calculation using up-to-date tables will provide an accurate estimate of PHOTFLAM.

Table 41.4: Pre-Cooldown Throughput Relative to Post-Cooldown

Filter   PC      PC Mag   WF      WF Mag
------------------------------------------------------------------------------
F160BW   0.865   -0.157   0.895   -0.120
F170W    0.910   -0.102   0.899   -0.116
F218W    0.931   -0.078   0.895   -0.120
F255W    0.920   -0.091   0.915   -0.096
F336W    0.969   -0.034   0.952   -0.053
F439W    0.923   -0.087   0.948   -0.058
F555W    0.943   -0.064   0.959   -0.045
F675W    0.976   -0.026   0.962   -0.042
F814W    0.996   -0.004   0.994   -0.007
------------------------------------------------------------------------------

PSF Variations (Time Dependent) The point spread function (PSF) of the telescope varies with time, and this can affect photometry using very small apertures and PSF fitting. Changes in focus are observed on an orbital timescale due to thermal breathing of the telescope and due to desorption, which causes a continual creeping of the focal position of about 0.85 microns per month. About twice a year, the focal position of the telescope is moved by several microns to remove the effect of the desorption. In addition, jitter, or pointing motion, can on occasion alter the effective PSF. The Observatory Monitoring System (OMS) files provide information on telescope jitter during observations (see Chapter 5). These files are now regularly provided to the observer with the raw data. Archival data taken after October 1994 have jitter files in the Archives. 
Limited requests for OMS files for observations prior to October 1994 can be handled by the STScI Help Desk (e-mail help@stsci.edu).

Charge Transfer Efficiency (Position Dependent)

Shortly after launch it was discovered that WFPC2 had a substantial charge transfer efficiency (CTE) problem: objects appeared to be about 10 percent fainter when observed at the top of the chip (y=800) than when observed at the bottom of the chip (y ~ 0). The April 23, 1994, cool down reduced the CTE problem to about a 4 percent peak-to-peak effect (Holtzman et al., 1995b). The effect appears to be smaller, or nonexistent, in the presence of a moderate background. A simple linear ramp provides an approximate correction, reducing the peak-to-peak deviations to about 1 to 2 percent. For observations after the cool down, ignoring CTE when doing photometry of a random set of bright stars affects the scatter at the 0 to 1.5 percent level. New calibration observations are being made in Cycle 5 to characterize the effect more precisely and to determine whether a preflash can minimize or remove it.

Geometric Distortion (Position Dependent)

Geometric distortion near the edges of the chips results in a change of the surface area covered by each pixel. The flatfielding corrects for this, so surface photometry is unaffected. However, integrated point-source photometry using a fixed aperture will be affected: the distortion introduces a 1 to 2 percent effect near the edges, with a maximum of about 4-5 percent in the corners. A correction image has been produced and is available from the Archive (f1k1552bu.r9h).

Gain Variance (Position Dependent)

The absolute sensitivities of the four chips differ somewhat. Flatfields have been determined using the gain=14 setup, normalized to 1.0 over the region [200:600,200:600]. However, most science observations are taken using the gain=7 setup. Because the gain ratio varies slightly from chip to chip, PHOTFLAM values will be affected.
The count ratios for the different chips from Holtzman are:

* PC1: 1.987
* WF2: 2.003
* WF3: 2.006
* WF4: 1.955

These count ratios should be included in your zeropoint calculation if you use values from Holtzman (see Table 41.5). If you use the value of PHOTFLAM from the header to determine your zeropoint, the different gains of the different chips will already be included. Remember to use the new PHOTFLAM values provided in Table 41.1 or the post-July 1995 synphot tables; the values included in the headers of data taken before mid-July 1995 are less accurate.

Pixel Centering (Position Dependent)

Small, sub-pixel variations in the chip quantum efficiency affect the photometry. The position of a star relative to the sub-pixel structure of the chip is estimated to have a ~1 percent effect on the photometry. At present there is no way to correct for this effect.

Miscellaneous Photometric Corrections

Aperture Correction

It is frequently difficult to measure the total magnitude of a star directly with the WFPC2 because of the extended wings of the PSF, scattered light, and the small pixel size--one often needs to use an aperture far larger than is practical. A more accurate method is to measure the light within a smaller aperture and then apply an offset to determine the total magnitude. A standard aperture radius of 0."5 has been adopted by Holtzman et al. (1995b)--note that the first Holtzman paper used a value of 1."0--and by the WFPC2 group at STScI. Even smaller apertures are preferable for faint point sources. An aperture radius of 2-4 pixels for stars, with a background annulus of around 10-15 pixels, has been found to be near optimal for simple aperture photometry of faint point sources by several groups. Aperture corrections are provided in Table 2 of Holtzman et al. (1995a), but you may prefer to use a well-exposed isolated star in your images (if one exists).
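Several of the corrections described above reduce to simple arithmetic. The following sketch illustrates the gain-ratio zeropoint adjustment and the linear CTE ramp using numbers from the text (the function names are ours; this is an illustration, not an official recipe):

```python
import math

# Gain=14/gain=7 count ratios from Holtzman (listed above)
GAIN_RATIO = {"PC1": 1.987, "WF2": 2.003, "WF3": 2.006, "WF4": 1.955}

def zeropoint_gain7(zp_gain14, chip="PC1"):
    """Adjust a gain=14 zeropoint (e.g., from Holtzman) for gain=7 data."""
    return zp_gain14 - 2.5 * math.log10(GAIN_RATIO[chip])

def cte_correction(y, ramp=0.04):
    """Approximate CTE loss in magnitudes for a star at row y,
    assuming a 0.04 mag peak-to-peak linear ramp over 800 rows."""
    return -ramp * y / 800.0

# The F814W zeropoint 21.688 (gain=14) becomes about 20.94 for gain=7 PC1 data:
print(zeropoint_gain7(21.688, "PC1"))
# A star at y=315 needs about -0.016 mag of CTE correction:
print(cte_correction(315))
```

Both numbers reappear in the worked example of Table 41.5 below.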
Color Terms

In some cases it may be necessary to transform from the WFPC2 filter set to more conventional filters (e.g., Johnson UBV or Cousins RI) in order to make comparisons with other datasets. The accuracy of these transformations is determined by how closely the WFPC2 filter matches the conventional filter and by how closely the spectral type (e.g., color, metallicity, surface gravity) of the object matches the spectral type of the calibration observations. Accuracies of 1-2 percent are typical for many cases, but much larger uncertainties are possible for certain filters (e.g., F336W with a red leak, see below) and for certain spectral types (e.g., very blue stars). Transformations can be determined by using synphot, or by using the transformation coefficients in Holtzman et al.

Digitization Noise

The minimum gain of the WFPC2 CCDs, 7 e-/ADU, is larger than the read noise of the chip. As a result, digitization can be a substantial source of noise in WFPC2 images. This effect is particularly pernicious when attempting to determine sky values, because the measured values tend to cluster about a few integral values (flattening causes the values to be slightly non-integral). As a result, using a median filter to remove objects that fall within the background annulus in crowded fields can cause a substantial systematic error, whose magnitude will depend on the annulus being measured. It is generally safer to use the mean, though care must then be taken to remove objects in the background annulus. A more subtle effect is that some statistics programs assume Gaussian noise characteristics when computing properties such as the median and mode; quantized noise can have surprising effects on these programs. An Instrument Science Report further discussing digitization noise should be released early in 1996.

Red Leaks

Several of the UV filters have substantial red leaks that may affect the photometry.
For example, the U filter (F336W) has a transmission at 7500 A that is only about a factor of 100 below the peak transmission at about 3500 A. The increased sensitivity of the CCDs in the red, coupled with the fact that most sources are brighter in the red, makes this an important problem in many cases. The synphot tasks can be used to estimate this effect for any given source spectrum.

Charge Traps

There are about 30 macroscopic charge transfer traps, where as little as 20 percent of the electrons are transferred during each time step of the readout. These defects result in bad pixels or, in the worst cases, bad columns, and should not be confused with the microscopic charge traps believed to be the cause of the CTE problem. The traps result in dark tails just above the bad pixel, and bright tails for objects farther above the bad pixel that are clocked out through the defect during the readout. The tails can cause large errors in photometric and astrometric measurements. In a random field, about 1 out of 100 stars is likely to be affected. Using a program that interpolates over bad pixels or columns (e.g., wfixup or fixpix) to make a cosmetically better image can result in very large (e.g., tenths of a magnitude) errors in the photometry in these rare cases. See also "Charge Traps" on page 519.

Exposure Times: Serial Clocks

The serial clocks option (i.e., the optional parameter CLOCKS = YES in the proposal instructions) is occasionally useful when an extremely bright star is in the field of view, in order to minimize the effects of bleeding. However, when using this option, the shutter open time can be in error by up to 0.25 second. The error in the exposure time occurs as a result of the manner in which the shutters are opened when CLOCKS = YES is specified. You can adjust for this error, however, by examining your header. If the keyword SERIALS = ON appears in your image header, then the serial clocks were employed.
To correct for the exposure time error, examine the SHUTTER keyword. If the value of this keyword is "A", then the true exposure time is 0.125 second less than that given in the header. If instead the value is "B", then the true exposure time is 0.25 second less than the header value. Users should also note that non-integral second exposure times cannot be performed with the serial clocks on. Therefore, if a non-integral exposure time is specified in the proposal, it will be rounded to the nearest second. The header keywords will properly reflect the true exposure duration.

An Example of Photometry with WFPC2

This example shows the steps involved in measuring the magnitude of the star #1461 (Harris et al., 1993) in the Cousins I passband. The image used for this example can be obtained from the HST Archive, or from the WWW at:

  http://www.stsci.edu/ftp/instrument_news/WFPC2/Wfpc2_phot

This WWW directory contains the materials for Instrument Science Report WFPC2 95-04, A Demonstration Analysis Script for Performing Aperture Photometry. Table 41.5 shows the results from an analysis script similar to that of ISR WFPC2 95-04, but including some of the corrections discussed above.

Images: u2g40o09t.c0h[1] and u2g40o0at.c0h[1]
Position: (315.37,191.16)
Filter: F814W
Exposure Time: 14 seconds
Date of observation: MJD = 49763.4

Table 41.5: Magnitude of Star #1461 in Omega Cen

Value                        Description
------------------------------------------------------------------------------
2113.49 counts               Raw counts in 0.5" radius aperture (11 pixels
                             for the PC)
-13.49 = 2100.00             Background subtraction (0.03544 counts x
                             380.522 pix^2)
x 0.9915 = 2082.15           Correction for geometric distortion. Not needed
                             if doing surface photometry.
=> 15.512 mag                Raw magnitude (= -2.5 x log10(2082.15 / 14 sec)
                             + 20.943). NOTE: -2.5 x log10(1.987) has been
                             added to the zeropoint from Table 41.1 (i.e.,
                             21.688), since these calibrations were taken
                             using the gain=14 setup; most science
                             observations use gain=7.
-0.10 = 15.412 mag           Aperture correction estimated from Holtzman et
                             al. (1995a).
-0.016 = 15.396              CTE correction (-0.04 x 315 / 800; assuming a
                             0.04 mag linear ramp and a y position of 315)
-0.000 => M_F814W = 15.396   Contamination correction (0.000 x [49763.4 -
                             49760.1])
-0.013 => m_I = 15.383 mag   Transformation to Cousins I passband
------------------------------------------------------------------------------

Further WFPC2 Reduction Issues

In this section we examine several important features of WFPC2 data that may need to be taken into account in order to obtain the best results from your data.

Cosmic Rays

WFPC2 images typically contain a large number of cosmic ray events, which are caused by the interaction of galactic cosmic rays and protons from the earth's radiation belt with the CCD. Hits occur at an average rate of about 1.8 events/CCD/s, with an overall variation in rate of 60 percent (peak-to-peak) depending upon geomagnetic latitude and position with respect to the South Atlantic Anomaly. Most cosmic ray events deposit a significant amount of charge in several pixels; the average number of pixels affected is 6, with a peak signal of 200 e-/pixel. More than 10,000 pixels per CCD will be affected by cosmic rays in a typical long exposure (1,500 seconds). Figure 41.1 shows part of an 800 second exposure on a WF chip (about a 200 pixel region), in which pixels affected by cosmic rays are shown in black and unaffected pixels are shown in white. A typical full orbit exposure (2000 s) would have about 2.5 times as many pixels corrupted by cosmic rays.

Figure 41.1: WF Exposure Showing Pixels Affected by Cosmic Rays

As a result of the undersampling of the WFPC2 PSF by the WF and PC pixels, it is very difficult to differentiate stars from cosmic rays using a single exposure. At present, the most reliable methods for removing cosmic rays require multiple exposures differing only by integral pixel displacements.
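The core idea behind multi-exposure rejection can be illustrated with a toy numerical sketch. Because cosmic rays only ever add charge, a pixel far above the minimum of the registered stack can be flagged and excluded. (This is our own simplified illustration; the noise model, thresholds, and function name here are assumptions, not the actual STSDAS algorithm.)

```python
import numpy as np

def reject_cosmic_rays(stack, nsigma=5.0, readnoise=5.0, gain=7.0):
    """Toy CR rejection for registered exposures of shape (n_images, ny, nx), in DN.

    Pixels far above the stack minimum (relative to the expected Poisson plus
    read noise) are masked; the result is the mean of the surviving values.
    """
    stack = np.asarray(stack, dtype=float)
    baseline = stack.min(axis=0)                  # cosmic rays only add charge
    sigma = np.sqrt(readnoise**2 + np.clip(baseline, 0, None) * gain) / gain
    mask = stack <= baseline + nsigma * sigma     # True = keep this pixel
    return np.where(mask, stack, 0).sum(axis=0) / mask.sum(axis=0)

# Three fake 4x4 exposures of a flat 100-DN field, one with a cosmic ray hit:
imgs = np.full((3, 4, 4), 100.0)
imgs[1, 2, 2] += 500.0                            # cosmic ray in the second image
clean = reject_cosmic_rays(imgs)
print(clean[2, 2])                                # back to the 100-DN sky level
```

Real WFPC2 data need the pointing-shift tolerance and iteration discussed below, which is why the STSDAS tasks are preferred in practice.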
STSDAS tasks such as crrej and gcombine can identify and correct pixels affected by cosmic rays in multiple images (if the images have integral shifts, these will first have to be removed by the user with a task such as imshift). When using these tasks, we recommend that users allow for differences between images that may be due to small pointing shifts. These differences are particularly noticeable in PC images, where pointing offsets of only 10 mas can cause differences between images of tens of percent near the edges of stellar PSFs. If such differences are detected, the user will want to allow for a multiplicative noise term in the noise parameters of the STSDAS task. Detailed explanations of crrej and gcombine can be obtained through the on-line help feature of STSDAS.

Because sub-pixel dithering strategies are becoming more common, tasks that can remove cosmic rays from exposures shifted by non-integral pixel offsets are now being investigated, but their eventual reliability and date of availability are not yet known. At the moment we therefore recommend that, in any dithering strategy, multiple exposures be taken at a single pointing or at an integral pixel shift from the original pointing.

A substantial improvement to the capabilities of the standard WFPC2 cosmic ray removal task, crrej, is presently underway. The new version will allow exposures of varying length, permit the creation of individual mask files for the input images, and have separate multiplicative error parameters for the PC and WF chips. We expect this version of crrej to be available in the autumn of 1995. When it is available, further information will be posted on the WFPC2 WWW pages.

Charge Traps

There are about 30 pixels in WFPC2 which do not efficiently transfer charge during readout; these are often quite noticeable. Typically, charge is delayed into successive pixels, producing a streak above the defective pixel.
In the worst cases, the entire column above the pixel can be rendered useless. On blank sky these traps will tend to produce a dark streak. However, when a bright object or cosmic ray is read through them, a bright streak will follow the object. Figure 41.2 shows examples of both of these effects. (Note that these "macroscopic" charge traps are different from the much smaller traps believed to be responsible for the charge transfer effect discussed in the chapter on photometry.) The images in Figure 41.2 show streaks in the background sky (a) and stellar images (b) produced by charge traps in the WFPC2. Individual traps have been catalogued and their identifying numbers are shown.

Figure 41.2: Streaks in a) Background Sky, and b) Stars

Bright tails have been measured on images taken both before and after the April 23, 1994 cool down. The behavior of the traps has been quite constant with time, and fortunately there is no evidence for the formation of new traps since the ground system testing in May 1993. The charge delay in each of the traps is well characterized by a simple exponential decay which varies in magnitude and scale from trap to trap.

The positions of the traps, as well as those of the pixels immediately above them, are marked in the .c1h data quality files with the value 2, indicating a chip defect. These pixels will be obviously defective even in images of sources of uniform surface brightness. However, after August 1995 the entire column above each trap will be marked with the value 256, which indicates a "questionable pixel." A strongly modulated object (such as a star) will leave a trail should it fall on any of these pixels. In cases where a bright streak is produced by a cosmic ray, standard cosmic ray removal techniques will typically remove the streak along with the cosmic ray. However, in cases where an object of interest has been affected, the user must be more careful.
While standard techniques such as wfixup will interpolate across affected pixels and produce an acceptable cosmetic result, interpolation can bias both photometry and astrometry. In cases where accurate reconstruction of the true image is important, modelling of the charge transfer is required. For further information on charge traps, including the measured parameters of the larger traps, users should consult WFPC2 Instrument Science Report 95-02, available on the WFPC2 WWW pages or from help@stsci.edu.

Dithering

The pixels of the PC undersample the point spread function (PSF) of the HST by a factor of about two, and the pixels of the WF are a factor of two coarser yet. Thus WFPC2 does not recover a substantial fraction of the spatial information that exists at the focal plane of the instrument. However, this information is not completely lost. Some of it can be recovered by dithering, or sub-stepping, the position of the chips by non-integral pixel amounts.

The recovery of high frequency spatial information is fundamentally limited by the pixel response function (PRF). The PRF of an ideal CCD with square pixels is simply a square boxcar function the size of the pixel. In practice, the PRF is a function not only of the physical size of the pixels, but also of the degree to which photons and electrons are scattered into adjacent pixels, as well as of smearing introduced by pointing wander of the telescope. The image recorded by the CCD is the "true" image (that which would be captured by an ideal detector at the focal plane) convolved with this PRF. Thus, at best, the image will be no sharper than that allowed by an ideal square pixel. In the case of WFPC2, in which at least 20 percent of the light falling on a given pixel is detected in adjacent pixels, the image is yet less sharp. The PRF of an ideal square pixel, that is, a boxcar function, severely suppresses power on scales comparable to the size of the pixel.
In particular, the power at a spatial frequency v = 1/lambda is suppressed by:

  sinc_l(v) = sin(pi v l) / (pi v l)

where l is the length of the side of a pixel. Thus spatial frequencies with a wavelength equal to l are completely suppressed (as sinc_l(1/l) = 0), and the power in frequencies above v = 1/l is dramatically reduced. In practice, the combined high-frequency suppression by the PRF and the camera optics limits nearly all the power in a WFPC2 image (in both the PC and WF chips) to frequencies less than 1/l. It is a well-known theorem of information theory that any band-limited signal with maximum frequency v = 1/lambda can be fully reproduced from samples taken at intervals of lambda/2. Thus the sampling interval required to capture nearly all of the information passed by square pixels is 1/2 the size of a pixel. This corresponds to dithering the CCD from its starting position of (0,0) to three other positions, (0, 1/2 l), (1/2 l, 0), and (1/2 l, 1/2 l); however, in practice, much of the information can be regained by a single dither to (1/2 l, 1/2 l).

Reconstruction and Deconvolution

The process of retrieving high spatial resolution information from dithered images can be thought of as having two stages. The first, reconstruction, removes the effect of sampling and restores the image to that produced by the convolution of the PSF and PRF of the telescope and detector. The more demanding stage, deconvolution (sometimes called restoration), attempts to remove much of the blurring produced by the optics and detector. In effect, deconvolution boosts the relative strength of the high-frequency components of the Fourier spectrum to undo the suppression produced by the PSF and PRF.
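The sinc suppression described above can be checked numerically with a few lines of pure arithmetic (no WFPC2 specifics):

```python
import math

def sinc_suppression(nu, l=1.0):
    """Suppression factor sin(pi*nu*l)/(pi*nu*l) of an ideal square pixel of side l."""
    x = math.pi * nu * l
    return 1.0 if x == 0 else math.sin(x) / x

# Frequencies with wavelength equal to the pixel size are completely removed...
print(abs(sinc_suppression(1.0)))    # ~0
# ...and the suppression is already about a factor of two well below that scale:
print(sinc_suppression(0.6))
```

This is why recovering power near the pixel scale requires both the finer sampling provided by dithering and, ultimately, deconvolution.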
If your observations were taken with either of the two dither patterns discussed above, and if the positioning of the telescope was accurate to about a tenth of a pixel (this is usually, but not always, the case), then you can reconstruct the image merely by interlacing the pixels of the offset images. In the case of a twofold dither--that is, images offset by a vector (n + 1/2, n + 1/2) pixels, where n is an integer--the interlaced images can be put on a square grid rotated 45 degrees from the original orientation of the CCD (see Figure 41.3). In the case of a fourfold dither, the images are interlaced on a grid twice as fine as the original CCD (see Figure 41.3). At present there is no STSDAS task to do the interlacing, though we expect one to be available by the spring of 1996. This direct method of reconstruction has the valuable property of entirely preserving the noise structure of the component images.

Figure 41.3: Interlacing Pixels of Offset Images

As part of the Hubble Deep Field Project, an STSDAS task is being developed to linearly reconstruct multiple offset images. It will use a method of reconstruction which can be thought of as shifting-and-adding with a variable pixel size. For poorly sampled data, the shifted pixels retain the initial pixel size--the final image combines the shifts correctly, but the gain in resolution is minimal. For a well-sampled field, such as that of the Hubble Deep Field, the size of the shifted pixels can be made quite small, and the image combination becomes equivalent to interlacing. The task will also correct for the effects of the geometric distortion of the WFPC2; this correction is important if shifts between dithered images are of order ten pixels or more. This software should be available for use by the general HST community sometime early in 1996. Users interested in this software should watch for announcements in the WFPC2 electronic newsletter and on the WFPC2 WWW pages.
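Interlacing a perfect fourfold dither amounts to slotting each image into one phase of a grid twice as fine as the CCD. A minimal sketch (assuming exact half-pixel offsets, ignoring geometric distortion; the function name and offset-to-phase convention are our own):

```python
import numpy as np

def interlace4(im00, im01, im10, im11):
    """Interlace four images dithered by (0,0), (0,l/2), (l/2,0), and (l/2,l/2)
    onto a grid twice as fine as the CCD (perfect half-pixel offsets assumed)."""
    ny, nx = im00.shape
    fine = np.empty((2 * ny, 2 * nx), dtype=float)
    fine[0::2, 0::2] = im00                # undithered pointing
    fine[0::2, 1::2] = im01                # shifted half a pixel in x
    fine[1::2, 0::2] = im10                # shifted half a pixel in y
    fine[1::2, 1::2] = im11                # shifted half a pixel in both
    return fine

# Four constant 2x2 "exposures" make the phase pattern easy to see:
a, b, c, d = (np.full((2, 2), v) for v in (1.0, 2.0, 3.0, 4.0))
print(interlace4(a, b, c, d))
```

Because each output pixel comes from exactly one input image, this direct interlacing preserves the noise structure of the component exposures, as noted above.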
Reconstruction can also be accomplished by a procedure called Projection onto Convex Sets, or POCS, which uses the fact that each of the dithered images can, in image space, be considered a linear projection of the unsampled image. Experiments using this method on HST data have shown it, like the variable pixel method described above, to work well in removing the effects of sampling. Unfortunately, this software is not yet in a user-friendly condition (and requires IDL). A more robust IDL version may be completed in the relatively near future, and in the longer term it is possible that an STSDAS POCS-based routine will be developed.

Although reconstruction largely removes the effects of sampling on the image, it does not restore the information lost to the smearing of the PSF and PRF. Deconvolution of the images, however, does hold out the possibility of recapturing much of this information. Figure 41.4, supplied by Richard Hook of the ST-ECF, shows the result of applying the Richardson-Lucy deconvolution scheme to HST data. The upper-left image shows one of four input images. The upper-right image shows a deconvolution of all of the data, and the lower two images show deconvolutions of independent subsets of the data. A dramatic gain in resolution is evident.

Many HST users will be familiar with the Richardson-Lucy (RL) deconvolution scheme, which proved quite useful in analyzing WF/PC-1 images. A version of the RL deconvolution scheme capable of handling dithered WFPC2 data is already available to STSDAS users: the task acoadd in the package stsdas.contrib. In order to use acoadd, users will need to supply the program both with a PSF (which in practice should be the convolution of the PRF with the optical PSF) and with the offsets in position between the various images. The offset between two images is best obtained by cross-correlating them using the task stsdas.analysis.crosscorr.
The peak in the cross-correlated image corresponds to the offset between the two images, and its position can be accurately determined using, for instance, imexamine. Even relatively short exposures (a few hundred seconds) on a high-latitude field (such as that chosen for the Hubble Deep Field) allow one to obtain offsets accurate to a small fraction of a pixel. Users will obtain the best results from cross-correlation if the images first have cosmic rays removed (cosmic rays are uncorrelated, and therefore do not bias the result, but they can substantially increase the noise in the cross-correlation). Users should also taper the images, using the taperedge task in the STSDAS fourier package, to avoid edge effects. In low signal-to-noise cases where the offset is only a few pixels, users may also find it helpful to remove hot pixels to avoid a bias toward zero offset. Once the offsets are determined, the untapered images should be used in acoadd.

In principle, image deconvolution requires an accurate knowledge of both the instrument PSF and PRF. At present, our best models of the WFPC2 PSF come from the publicly available Tiny Tim software. However, it is known that this software can produce PSFs which differ noticeably from those observed in images, and our knowledge of the exact shape of the WFPC2 PRF is presently quite limited. Nonetheless, tests done on WFPC2 images suggest that RL deconvolution can give the WFPC2 user a substantial gain in resolution even in the presence of typical PSF and PRF errors. The greatest impediment to the regular use of RL is the variation in the shape of the PSF over the field of view. At the time of writing, no software is available which can handle multiple input images and a varying PSF. As a result, RL can only be applied to limited regions of a chip at a time.
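The cross-correlation approach to offset determination can be sketched with FFTs. This toy version recovers only integer-pixel shifts; for real data, crosscorr with tapering and sub-pixel peak fitting, as described above, is the recommended route.

```python
import numpy as np

def integer_offset(im1, im2):
    """Integer-pixel shift (dy, dx) taking im1 onto im2, via FFT cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(im2) * np.conj(np.fft.fft2(im1))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    ny, nx = xcorr.shape
    if dy > ny // 2:          # wrap large positive lags to negative shifts
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)

# A fake noisy sky field and a copy shifted by a known (+3, -2) pixels:
rng = np.random.default_rng(1)
field = rng.poisson(5, (64, 64)).astype(float)
shifted = np.roll(field, (3, -2), axis=(0, 1))
print(integer_offset(field, shifted))   # recovers (3, -2)
```

Because cosmic rays are uncorrelated between exposures, they only add noise to the correlation peak, which is why removing them first improves the measurement.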
Users interested in more information on dithering, reconstruction, and deconvolution should consult the February and September 1995 issues of the ST-ECF newsletter, where these issues are discussed in detail. More information on this subject can be expected to appear in the WFPC2 electronic newsletters and on the WFPC2 WWW pages.

Figure 41.4: Richardson-Lucy Deconvolution of HST Data

WFPC2 Image Anomalies

In this section we present a number of unusual occurrences which can, on occasion, affect your data.

Bias Jumps

Two bias levels, one for even columns and one for odd columns, are derived from the engineering data. However, on rare occasions (perhaps one out of several hundred images) the value of the bias will change during readout, causing a distinct jump in the zero level of the image. Figure 41.5 shows a very unusual event in which the bias jumped in two chips during the same image. There is no standard procedure to remove this defect, but it can be corrected by measuring the jump in the .x0h (bias) file, or directly in the image if the image is clean enough. Standard IRAF procedures such as imexamine or imstat are sufficient to obtain a good estimate of the offset. The offset can then be removed, for instance, by copying out the affected chip to another image using the command:

  imcalc image_in image_out "if (y .lt. YJUMP) then im1 else (im1 - BJUMP)"

where YJUMP is the line at which the jump occurs and BJUMP is its magnitude. image_out can then be copied back into the appropriate WFPC2 image group.

Figure 41.5: Bias Jump in Two Chips

Residual Images

Observations of very bright sources can leave behind a residual image. This residual is caused by two distinct effects. In the first, in heavily saturated pixels charge is forced into deeper layers of the CCD which are not normally cleared by readout. Over time, the charge slowly leaks back into the imaging layers and appears in subsequent images.
The time scale for leakage back into the imaging region depends on the amount of over-exposure; strongly saturated images can require several hours to clear completely. The second effect is caused by charge transfer inefficiencies. At all exposure levels, some charge becomes temporarily bound to impurities in the silicon of the CCD. The effect is most noticeable in images with high exposure levels since, it is thought, electrons become exposed to more impurities as the wells fill. This effect leaves behind charge both in the bright regions of the image and in that part of the chip through which the bright objects were read out. Figure 41.6 shows a saturated star on PC1 and the residual image seen in an 1800 second dark calibration frame started six minutes later. Note that the residual image is bright not only where the PC image was overexposed (the first effect), but also shows a wide swath below the star due to the second effect.

Figure 41.6: Saturated Star and Residual Image

Ghosts

Ghost images may occur in images of bright objects due to internal reflections in the WFPC2 camera. The most common ghosts are caused by internal reflections in the MgF2 field-flattener lenses. For these ghosts, the line connecting the ghost and the primary image passes through the optical center of the chip, and the ghost always lies farther from the center than the primary image. Figure 41.7 gives an example of one of these ghosts.

Figure 41.7: Field-Flattener Ghost in WF2--Image Shows Entire CCD

Ghosts may also occur due to reflections between the internal surfaces of a filter. The position of these ghosts will vary from filter to filter and from chip to chip. For any given filter and chip combination, the direction of the offset of the ghost from the primary image will be constant, although the size of the offset may vary as a function of the position of the primary image. Filter ghosts can be easily recognized by their comatic (fan-shaped) structure.
Particularly bright objects may produce multiple ghosts due to repeated internal reflections. Figure 41.8 shows an example of a filter ghost.

Figure 41.8: Detail of Filter Ghost on WF4

Earth Reflections

Light from the bright sunlit earth is on rare occasions reflected off the Optical Telescope Assembly (OTA) baffles and secondary support and into the WFPC2. These reflections can occur when the bright earth is less than ~40 degrees from the OTA axis. (The default bright earth limb avoidance is 20 degrees; science observations are not scheduled at smaller limb angles to the sunlit earth.) The light raises the overall background level of the field; however, the WFPC2 camera mirror supports can vignette the scattered light, producing either X-shaped or diagonal depressions in the level of the background. Figure 41.9 shows a typical example of the pattern formed by the scattered light. The scattered light in this image has a level of about 100 electrons; the darkest portion of the X is about 40 electrons below the average background level.

Figure 41.9: Scattered Light Pattern

PC1 Stray Light

The WFPC2 was originally intended to contain two separate pyramids--one for four PC cameras and the other for four WF cameras. Budget reductions caused the PC pyramid to be abandoned and the first WF camera to be replaced by a PC camera. However, the pyramid mirror corresponding to the PC camera was not reduced in size. As a result, baffling for the PC chip is not optimal, and a bright star falling on the pyramid outside of the PC field of view can produce an obvious artifact, typically shaped like a broad, segmented arc. A star bright enough to produce a total count rate of 1 DN/s on the chip will produce a ghost with a count rate of about 1 x 10^-7 DN/pixel/s over the affected region. When scheduling observations, users should avoid placing stars brighter than m ~ 14 in the L-shaped region surrounding the PC.
Other Anomalies

Other image anomalies, such as bright streaks from other spacecraft, scattered light from bright stars near the field of view, and missing image sections due to dropped data, occur on rare occasions. If you suspect that you are the unfortunate victim of one of these rare accidents, you may wish to consult the WFPC2 Instrument Science Report "A Field Guide to WFPC2 Image Anomalies," which can be obtained by sending e-mail to help@stsci.edu or directly through the world wide web from the WFPC2 web page.

References

* Harris, H.C., Hunter, D.A., Baum, W.A., and Jones, J.H., 1993, AJ, 105, 1196.
* Holtzman, J.A., et al., 1995a, PASP, 107, 156.
* Holtzman, J.A., et al., 1995b, PASP, 107, 1065.

------------------------------------------------------------------------------

PART 9: APPENDIXES

This part contains only one appendix: a glossary of terms and abbreviations used in this document. The forms that were included in the previous version of this handbook are not reproduced here; however, they are available through the Institute's world wide web server, STEIS.

Glossary

The following terms and acronyms are used in this manual.

A/D: Analog-to-digital converter.
AEDP: Astrometry and Engineering Data Processing.
CADC: Canadian Astronomical Data Center.
CCD: Charge-coupled device. Solid-state, light-detecting device.
CDBS: Calibration Data Base. System for maintaining reference files and tables used to calibrate HST observational datasets.
C&DH: Control and Data Handling.
CL: Command language. The IRAF system-level prompt.
COSTAR: Corrective Optics Space Telescope Axial Replacement.
CTE: Charge transfer efficiency.
CVC: Current to voltage converter. Used with HSP.
DADS: Data Archive and Distribution System.
DMF: Data Management Facility. Used for archiving HST data.
DN: Digital number.
DQE: Detector quantum efficiency.
DQF: Data quality file.
EBS: Electron Bombarded Silicon.
EED: Extracted engineering data.
ESA: European Space Agency.
FES: Fine error signal. Used with FGS.
FFT: Fast Fourier transform.
FGE: Fine Guidance Electronics. Used with FGS.
FGS: Fine Guidance Sensors.
FITS: Flexible Image Transport System. A generic IEEE- and NASA-defined standard used for storing image data.
FOC: Faint Object Camera.
FOS: Faint Object Spectrograph.
FOV: Field of view.
FTP: File Transfer Protocol. Basic tool used to retrieve files from a remote system. Ask your system manager for information about using FTP.
FWHM: Full width at half maximum.
GCI: Geocentric inertial.
GEIS: Generic Edited Information Set. The multigroup format used by STSDAS for storing HST image data.
GHRS: Goddard High-Resolution Spectrograph.
GIF: Graphics Interchange Format. Data format developed by CompuServe for storing and transporting image data. Supported by Mosaic and xv for use over the Internet.
GIM: Geomagnetically-induced motion problem. Formerly a correction in the FOS calibration pipeline, now applied on the spacecraft.
GO: General Observer.
Gopher: Hypertext-oriented software developed at the University of Minnesota for retrieving information through the Internet.
GPB: Group parameter block.
GSC: Guide Star Catalog.
GTO: Guaranteed Time Observer.
HSP: High-Speed Photometer.
HST: Hubble Space Telescope.
HTML: Hypertext Markup Language.
Hz: Hertz. Cycles per second.
ICD: Interface control document. Defines data structures used between software or systems to ensure compatibility.
ICF: Intrinsic correlation function.
IDT: Investigation Development Team.
IFT: Inverse Fourier transform.
IGI: Interactive Graphics Interpreter. Graphics program in STSDAS.
IPPPSSOOT: HST file naming convention used to uniquely identify files (described on page 16).
IR: Infrared.
IRAF: Image Reduction and Analysis Facility. The system on which STSDAS is built.
IUE: International Ultraviolet Explorer.
JPL: Jet Propulsion Laboratory, located at the California Institute of Technology in Pasadena. Home of the WF/PC team.
K: Degree Kelvin.
LOS: Line of sight.
LSA: Large science aperture. One of two apertures on the GHRS.
LVPS: Low-voltage power supply.
mas: Milliarcsecond.
MEM: Maximum Entropy Method. Algorithm for restoring images.
MJD: Modified Julian date.
MOSS: Moving Object Support System.
ND: Neutral density.
NOAO: National Optical Astronomy Observatories.
OCX: Observer comments file from OSS. Contains updated mission information obtained when the observation is taken.
OFAD: Optical field angle distortion.
OMS: Observatory Monitoring System.
OPUS: OSS and PODPS Unified Systems.
OSS: Observation Support System.
OTA: Optical telescope assembly.
OV: Orbital verification. Process of checking out equipment on HST.
PC: Planetary Camera; part of WF/PC.
PCS: Pointing Control System.
PDA: Photon Detector Assembly (in FOC).
PDB: Project Data Base.
PDQ: PODPS Data Quality file. Contains predicted and actual observation parameters.
PI: Principal investigator.
pipe: To use the output from one task or program as the input to another task or program. The output of a STSDAS or IRAF task can be piped to another task.
PMT: Photomultiplier tube.
PODPS: Post-Observation Data Processing System.
PSF: Point spread function. Needed to restore HST images.
RA: Right ascension.
R-L: Richardson-Lucy. Algorithm for restoring images.
rms: Root mean square.
RSDP: Routine Science Data Processing. The basic calibration (pipeline) system used for processing all HST observation datasets.
RTB: Return-to-brightest. Target acquisition mode for the GHRS.
SAA: South Atlantic Anomaly.
SDF: Science Data Formatter.
SHL: Science header line.
SHP: Standard header packet. File containing spacecraft information from the time of observation.
S/N: Signal-to-noise ratio.
SOGS: Science Operations Ground System.
SSA: Small science aperture. One of two apertures on the GHRS; the other is the LSA.
ST-ECF: Space Telescope European Coordinating Facility.
STARCAT: System used for retrieving archived HST data. To be replaced by StarView in 1994.
STEIS: Space Telescope Electronic Information System. The anonymous FTP host from which information, software, documentation, and other resources pertaining to the HST can be obtained.
STL: Science trailer line.
STScI: Space Telescope Science Institute.
STSDAS: Space Telescope Science Data Analysis System. The complete suite of data analysis and calibration routines used to process HST data.
SV: Science verification. Process of taking observations that can be used for HST instrument calibration.
TEC: Thermoelectric cooler. Part of WF/PC.
TF: Transfer function.
TFMRP: Transfer Function Mode Reduction Package. Used with the FGS.
TIFF: Tagged Image File Format. Method of storing images, usually from a scanner.
TIM: Telescope Image Modeling. Software used to generate PSFs.
UDL: Unique data log. File containing instrument command settings.
URL: Uniform resource locator. Address for the WWW.
UT: Universal time.
UV: Ultraviolet.
VPU: Video processing unit.
WAIS: Wide-Area Information Server. Method of locating information on a network using indexes to files.
WCS: World Coordinate System.
WFC: Wide Field Camera. One of the two cameras in WF/PC.
WF/PC: Wide Field/Planetary Camera.
WFPC2: Wide Field and Planetary Camera 2. Replacement for WF/PC, installed during the first servicing mission in December 1993.
world coordinates (WC): The coordinate system that applies naturally to the data; for example, pixels or wavelength.
WWW: World Wide Web. Hypertext-oriented method for finding and retrieving information over the Internet.
ZD: Zenith distance.