Altering and saving a new fits file changes its byteorder

I’m having trouble saving a new FITS file after altering it and then applying it to sunpy’s Map. Here’s a short version of my problem:

import numpy as np
import astropy.io.fits as fits

data = fits.getdata(filepath)

# This is where I make a change to the FITS data
data_copy = data.copy()
data_copy[1825:1860, 2090:2125] = 10940.843

# Saving the new version
header_data = fits.getheader(filepath)
fits.writeto(newfits, data_copy, header=header_data)

The problem is that the new FITS file is saved, but it has a different byte order (big-endian), and in my next step, when I use sunpy’s Map function, it raises an error.

import sunpy.map

map_seq = sunpy.map.Map(newfits)

MapMetaValidationError: Image coordinate units for axis 1 not present in metadata.
Image coordinate units for axis 2 not present in metadata.

I don’t know what is changing in my original FITS file that makes it unreadable by the Map function (the original file works fine). What can I do?

I’m not sure I understand your point about byte order. It seems the error comes from some header cards that have not been copied, which could come from some confusion between the primary and data headers, but it’s hard to say without more detail on the file.

Does the original file open with a sunpy map?

When checking the old and new FITS files with numpy.info(), there is a line that reports the byte order: the old file is ‘little’ and the new one ‘big’. I also think the problem is in the header, but I thought I was keeping it unaltered with fits.writeto().

The file is an HMI/SDO continuum intensity FITS file. What more detail can I provide to help? Thank you in advance!

Yes! sunpy map works fine with the original file.

Can we get a copy of the old and new file?

Yes, of course. Here’s a link to a drive folder with both: FITS files - Google Drive

As you’ll see, the altered file grows in size, and I suspect the header also changes, even though I tried copying it from the original before using fits.writeto().

So there are several things going on here:

  1. The original file is Rice-compressed, which is why it is much smaller.
  2. This means that the first header in the original file only exists to enable the compression.
  3. You actually want to use the second header in the original file to create the altered FITS file.
  4. By default, I believe astropy.io.fits applies no compression when writing, which is why the files differ in size.
In [4]: fits.getheader("./hmi_ic_45s_2015_08_05_00_00_45_tai_continuum_og.fits")
Out[4]:
SIMPLE  =                    T / file does conform to FITS standard
BITPIX  =                   16 / number of bits per data pixel
NAXIS   =                    0 / number of data axes
EXTEND  =                    T / FITS dataset may contain extensions
COMMENT   FITS (Flexible Image Transport System) format is defined in 'Astronomy
COMMENT   and Astrophysics', volume 376, page 359; bibcode: 2001A&A...376..359H

vs

In [5]: fits.getheader("./hmi_ic_45s_2015_08_05_00_00_45_tai_continuum_og.fits",ext=1)
Out[5]: WARNING: VerifyWarning: Verification reported errors: [astropy.io.fits.verify]
WARNING: VerifyWarning: Card 'CRDER1' is not FITS standard (invalid value string: 'nan').  Fixed 'CRDER1' card to meet the FITS standard. [astropy.io.fits.verify]
WARNING: VerifyWarning: Note: astropy.io.fits uses zero-based indexing.
 [astropy.io.fits.verify]
WARNING: VerifyWarning: Card 'CRDER2' is not FITS standard (invalid value string: 'nan').  Fixed 'CRDER2' card to meet the FITS standard. [astropy.io.fits.verify]
WARNING: VerifyWarning: Card 'CSYSER1' is not FITS standard (invalid value string: 'nan').  Fixed 'CSYSER1' card to meet the FITS standard. [astropy.io.fits.verify]
WARNING: VerifyWarning: Card 'CSYSER2' is not FITS standard (invalid value string: 'nan').  Fixed 'CSYSER2' card to meet the FITS standard. [astropy.io.fits.verify]

XTENSION= 'IMAGE   '           / binary table extension
BITPIX  =                   16 / data type of original image
NAXIS   =                    2 / dimension of original image
NAXIS1  =                 4096 / length of original image axis
NAXIS2  =                 4096 / length of original image axis
PCOUNT  =                    0 / size of special data area
GCOUNT  =                    1 / one data group (required keyword)
DATE    = '2015-08-09T15:36:51'
DATE-OBS= '2015-08-04T23:59:53.70'
TELESCOP= 'SDO/HMI'
INSTRUME= 'HMI_FRONT2'
WAVELNTH= 6173.0
CAMERA  =                    2
BUNIT   = 'DN/s'
ORIGIN  = 'SDO/JSOC-SDP'
CONTENT = 'CONTINUUM INTENSITY'
QUALITY =                    0
QUALLEV1=                    0
HISTORY Polynomial Coefficients used for Doppler velocity correction: 1.216183e+
HISTORY 02 5.820236e-03 -4.382100e-06 -9.209832e-10
COMMENT De-rotation: ON; Un-distortion: ON; Re-centering: ON; Re-sizing: OFF; co
COMMENT rrection for cosmic-ray hits; correction front/side intensity implemente
COMMENT d for mod L; RSUNerr=5.0 pixels; dpath=/home/jsoc/cvs/Development/JSOC/p
COMMENT roj/lev1.5_hmi/apps/; linearity=1 with coefficients updated on 2014/01/1
COMMENT 5; smooth=1; propagate eclipse bit from level 1; use of larger crop radi
COMMENT us look-up tables
BLD_VERS= '-808'
HCAMID  =                    3
SOURCE  = 'hmi.lev1[:#182564393,#182564369,#182564345,#182564417,#182564453,#18'
TOTVALS =             12140466
DATAVALS=             12140466
MISSVALS=                    0
SATVALS =                13643
DATAMIN2= 142.447220
DATAMAX2= 69236.625000
DATAMED2= 49526.226562
DATAMEA2= 44559.746094
DATARMS2= 15949.011719
DATASKE2= -1.759772
DATAKUR2= 2.398863
DATAMIN = 7878.100098
DATAMAX = 69236.625000
DATAMEDN= 50835.441406
DATAMEAN= 49363.160156
DATARMS = 7766.035645
DATASKEW= -0.632247
DATAKURT= -0.310031
CTYPE1  = 'HPLN-TAN'
CTYPE2  = 'HPLT-TAN'
CRPIX1  = 2039.155396
CRPIX2  = 2051.147217
CRVAL1  = 0.000000
CRVAL2  = 0.000000
CDELT1  = 0.504322
CDELT2  = 0.504322
CUNIT1  = 'arcsec'
CUNIT2  = 'arcsec'
CROTA2  = 179.929581
CRDER1  = 'nan     '
CRDER2  = 'nan     '
CSYSER1 = 'nan     '
CSYSER2 = 'nan     '
WCSNAME = 'Helioprojective-cartesian'
DSUN_OBS= 151752958120.29
DSUN_REF= 149597870691
RSUN_REF= 696000000
CRLN_OBS= 80.091087
CRLT_OBS= 6.036586
CAR_ROT =                 2166
OBS_VR  = 2334.453199
OBS_VW  = 29075.803660
OBS_VN  = 3739.675890
RSUN_OBS= 946.016541
T_OBS   = '2015.08.05_00:00:52_TAI'
T_REC   = '2015.08.05_00:00:45_TAI'
TRECEPOC= '1993.01.01_00:00:00_TAI'
TRECSTEP= 45.0
TRECUNIT= 'secs'
CADENCE = 45.0
DATASIGN=                    1
HFLID   =                 1021
HCFTID  =                   11
QLOOK   =                    0
CAL_FSN =             86968067
LUTQUERY= 'hmi.lookup_corrected_expanded[86968067][][3][37][50][0][80][6]'
TSEL    = 22.278479
TFRONT  = 32.357437
TINTNUM =                   72
SINTNUM =                   10
DISTCOEF= '/home/jsoc/cvs/Development/JSOC/proj/lev1.5_hmi/apps//../libs/lev15/'
ROTCOEF = '/home/jsoc/cvs/Development/JSOC/proj/lev1.5_hmi/apps//../libs/lev15/'
ODICOEFF=                    6
OROCOEFF=                    4
POLCALM =                    1
CODEVER0= '$Id: HMI_observables.c,v 1.48 2015/05/20 18:05:45 couvidat Exp $'
CODEVER1= '$Id: interpol_code.c,v 1.4 2015/01/31 18:01:02 kehcheng Exp $'
CODEVER2= '$Id: interpol_code.c,v 1.4 2015/01/31 18:01:02 kehcheng Exp $'
CODEVER3= '$Id: polcal.c,v 1.5 2013/12/22 22:54:08 couvidat Exp $'
CALVER64=                 8466
RECNUM  =              6360380
BLANK   =               -32768
BZERO   =               97304.
BSCALE  =                   3.

This is why sunpy.map.Map is complaining: the metadata was never passed to the new FITS file.

As for the difference in byte order, I do not know, but I would not expect it to matter.
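For what it’s worth, the byte order only describes how the numbers are stored, not what they are; FITS stores binary data big-endian, so astropy hands back big-endian arrays. A quick numpy-only sketch:

```python
import numpy as np

# Big-endian float32, as you would get from a FITS HDU:
big = np.arange(4, dtype=">f4")

# Converting to the machine's native order does not change the values:
native = big.astype(big.dtype.newbyteorder("="))

print(big.dtype.byteorder)          # '>' on a little-endian machine
print(np.array_equal(big, native))  # True: byte order is storage, not content
```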

Thank you very much for the clarification!

So, in order to have sunpy.map.Map read the new FITS file, I should take the second header from the original, and copy it to the new, correct?

How can I do that? Should I use header_data = fits.getheader(filepath) as a list and collect the second header?

You should be able to pass ext=1 as a keyword to fits.getheader to select that header (astropy.io.fits uses zero-based indexing, as in the example above).
