From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 536b056 110/113: Imported recent changes in master, conflicts in book fixed
Date: Fri, 16 Apr 2021 10:34:03 -0400 (EDT)
branch: master
commit 536b056847e648cf9dbb67d01b135c86025c00a0
Merge: d0d8d20 df23eac
Author: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Imported recent changes in master, conflicts in book fixed
    Several conflicts came up because of the new one-sentence-per-line
    convention in the book, but they were easy to fix.
---
NEWS | 69 +-
bin/TEMPLATE/Makefile.am | 6 +-
bin/TEMPLATE/ui.c | 1 +
bin/arithmetic/Makefile.am | 7 +-
bin/arithmetic/args.h | 13 +
bin/arithmetic/arithmetic.c | 3 +-
bin/arithmetic/main.h | 1 +
bin/arithmetic/ui.c | 1 +
bin/arithmetic/ui.h | 3 +-
bin/buildprog/Makefile.am | 6 +-
bin/buildprog/ui.c | 1 +
bin/convertt/Makefile.am | 6 +-
bin/convertt/color.c | 67 +-
bin/convertt/ui.c | 1 +
bin/convolve/Makefile.am | 6 +-
bin/cosmiccal/Makefile.am | 6 +-
bin/cosmiccal/args.h | 15 +
bin/cosmiccal/cosmiccal.c | 4 +
bin/cosmiccal/main.h | 1 +
bin/cosmiccal/ui.c | 61 +-
bin/cosmiccal/ui.h | 3 +-
bin/crop/Makefile.am | 6 +-
bin/crop/ui.c | 2 +
bin/fits/Makefile.am | 6 +-
bin/fits/ui.c | 1 +
bin/gnuastro.conf | 2 +-
bin/match/Makefile.am | 6 +-
bin/match/ui.c | 1 +
bin/mkcatalog/Makefile.am | 6 +-
bin/mknoise/Makefile.am | 6 +-
bin/mknoise/ui.c | 2 +
bin/mkprof/Makefile.am | 6 +-
bin/mkprof/ui.c | 1 +
bin/noisechisel/Makefile.am | 6 +-
bin/noisechisel/detection.c | 10 +-
bin/noisechisel/ui.c | 1 +
bin/segment/Makefile.am | 6 +-
bin/segment/ui.c | 13 +-
bin/statistics/Makefile.am | 6 +-
bin/table/Makefile.am | 6 +-
bin/table/args.h | 50 +-
bin/table/main.h | 26 +-
bin/table/table.c | 266 +-
bin/table/ui.c | 332 +-
bin/table/ui.h | 15 +-
bin/warp/Makefile.am | 6 +-
configure.ac | 149 +-
doc/announce-acknowledge.txt | 17 +-
doc/genauthors | 18 +-
doc/gnuastro.en.html | 8 +-
doc/gnuastro.fr.html | 8 +-
doc/gnuastro.texi | 22421 +++++++++++++++--------------------------
doc/release-checklist.txt | 16 +-
lib/label.c | 9 +-
lib/options.c | 40 +-
lib/statistics.c | 16 +-
56 files changed, 8747 insertions(+), 15024 deletions(-)
diff --git a/NEWS b/NEWS
index 9756ca1..14c5897 100644
--- a/NEWS
+++ b/NEWS
@@ -7,6 +7,45 @@ See the end of the file for license conditions.
** New features
+ Arithmetic:
+ --onedonstdout: when the output is one-dimensional, print the values on
+ the standard output, not into a file.
+
+ CosmicCalculator:
+ --lineatz: return the observed wavelength of a line if it was emitted at
+ the redshift given to CosmicCalculator. You can either give a known
+ line name, or directly give the emitted line's wavelength as a number.
+
+ Table:
+ --equal: Output only rows that have a value equal to one of the given
+ values in the given column. For example `--equal=ID,2,4,5' will select
+ only rows that have a value of 2, 4 or 5 in the `ID' column.
+ --notequal: Output only rows whose value in the given column differs
+ from all the values given to this option.
+
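The row-selection semantics of the two new Table options above can be modeled with a short Python sketch (illustrative only: asttable is a compiled C program and these helper names are ours, not Gnuastro's):

```python
# A toy table: one row per dictionary, with an `ID' column.
rows = [{"ID": 1}, {"ID": 2}, {"ID": 4}, {"ID": 7}]

def select_equal(rows, column, values):
    # Keep rows whose value in `column' is one of `values' (--equal).
    return [r for r in rows if r[column] in values]

def select_notequal(rows, column, values):
    # Keep rows whose value in `column' is none of `values' (--notequal).
    return [r for r in rows if r[column] not in values]

print(select_equal(rows, "ID", {2, 4, 5}))     # rows with ID 2 and 4
print(select_notequal(rows, "ID", {2, 4, 5}))  # rows with ID 1 and 7
```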
+** Removed features
+
+** Changed features
+
+** Bugs fixed
+ bug #56736: CosmicCalculator crash when a single value given to --obsline.
+ bug #56747: ConvertType's SLS colormap has black pixels which should be
+ orange.
+ bug #56754: Wrong sigma clipping output when many values are equal.
+
+
+
+
+
+* Noteworthy changes in release 0.10 (library 8.0.0) (2019-08-03) [stable]
+
+** New features
+
+ Installation:
+ - With the following options at configure time, it is possible to
+ build Gnuastro without the optional libraries (even if they are
+ present on the host system): `--without-libjpeg', `--without-libtiff',
+ `--without-libgit2'.
+
All programs:
- When an array is memory-mapped to non-volatile space (like the
HDD/SSD), a warning/message is printed that shows the file name and
@@ -113,25 +152,15 @@ See the end of the file for license conditions.
- gal_statistics_outlier_flat_cfp: Improved implementation with new API.
- New `quietmmap' argument added to the following functions (as the
argument following `minmapsize'). For more, see the description above
- of the new similarly named option to all programs.
- - gal_array_read
- - gal_array_read_to_type
- - gal_array_read_one_ch
- - gal_array_read_one_ch_to_type
- - gal_data_alloc
- - gal_data_initialize
- - gal_fits_img_read
- - gal_fits_img_read_to_type
- - gal_fits_img_read_kernel
- - gal_fits_tab_read
- - gal_jpeg_read
- - gal_label_indexs
- - gal_list_data_add_alloc
- - gal_match_coordinates
- - gal_pointer_allocate_mmap
- - gal_table_read
- - gal_tiff_read
- - gal_txt_image_read
+ of the new similarly named option to all programs: `gal_array_read'
+ `gal_array_read_to_type', `gal_array_read_one_ch',
+ `gal_array_read_one_ch_to_type', `gal_data_alloc',
+ `gal_data_initialize', `gal_fits_img_read',
+ `gal_fits_img_read_to_type', `gal_fits_img_read_kernel',
+ `gal_fits_tab_read', `gal_jpeg_read', `gal_label_indexs',
+ `gal_list_data_add_alloc', `gal_match_coordinates',
+ `gal_pointer_allocate_mmap', `gal_table_read', `gal_tiff_read' and
+ `gal_txt_image_read'
Book:
- The two larger tutorials ("General program usage tutorial", and
@@ -155,6 +184,8 @@ See the end of the file for license conditions.
bug #56635: Update tutorial 3 with bug-fixed NoiseChisel.
bug #56662: Converting -R to -Wl,-R causes a crash in configure on macOS.
bug #56671: Bad sorting with asttable if nan is present.
+ bug #56709: Segment crash when input has blanks, but labels don't.
+ bug #56710: NoiseChisel sometimes not including blank values in output.
diff --git a/bin/TEMPLATE/Makefile.am b/bin/TEMPLATE/Makefile.am
index bab4ad9..ee006da 100644
--- a/bin/TEMPLATE/Makefile.am
+++ b/bin/TEMPLATE/Makefile.am
@@ -23,6 +23,9 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
@@ -33,7 +36,8 @@ bin_PROGRAMS = astTEMPLATE
## don't keep external variables (needed in Argp) after the first link. So
## the `libgnu' (that is indirectly linked through `libgnuastro') can't see
## those variables. We thus need to explicitly link with `libgnu' first.
-astTEMPLATE_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astTEMPLATE_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astTEMPLATE_SOURCES = main.c ui.c TEMPLATE.c
diff --git a/bin/TEMPLATE/ui.c b/bin/TEMPLATE/ui.c
index 7d888b7..717bf17 100644
--- a/bin/TEMPLATE/ui.c
+++ b/bin/TEMPLATE/ui.c
@@ -100,6 +100,7 @@ ui_initialize_options(struct TEMPLATEparams *p,
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
diff --git a/bin/arithmetic/Makefile.am b/bin/arithmetic/Makefile.am
index 6d562d8..22ccfd2 100644
--- a/bin/arithmetic/Makefile.am
+++ b/bin/arithmetic/Makefile.am
@@ -23,13 +23,18 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
+
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astarithmetic
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astarithmetic_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astarithmetic_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la \
+ -lgnuastro $(MAYBE_NORPATH)
astarithmetic_SOURCES = main.c ui.c arithmetic.c operands.c
diff --git a/bin/arithmetic/args.h b/bin/arithmetic/args.h
index 987148d..788a818 100644
--- a/bin/arithmetic/args.h
+++ b/bin/arithmetic/args.h
@@ -85,6 +85,19 @@ struct argp_option program_options[] =
GAL_OPTIONS_NOT_MANDATORY,
GAL_OPTIONS_NOT_SET
},
+ {
+ "onedonstdout",
+ UI_KEY_ONEDONSTDOUT,
+ 0,
+ 0,
+ "Write 1D output on stdout, not in a table.",
+ GAL_OPTIONS_GROUP_OUTPUT,
+ &p->onedonstdout,
+ GAL_OPTIONS_NO_ARG_TYPE,
+ GAL_OPTIONS_RANGE_0_OR_1,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET
+ },
{0}
};
diff --git a/bin/arithmetic/arithmetic.c b/bin/arithmetic/arithmetic.c
index 2a370c5..6935036 100644
--- a/bin/arithmetic/arithmetic.c
+++ b/bin/arithmetic/arithmetic.c
@@ -1247,7 +1247,8 @@ reversepolish(struct arithmeticparams *p)
will be freed while freeing `data'. */
data->wcs=p->refdata.wcs;
if(data->ndim==1 && p->onedasimage==0)
- gal_table_write(data, NULL, p->cp.tableformat, p->cp.output,
+ gal_table_write(data, NULL, p->cp.tableformat,
+ p->onedonstdout ? NULL : p->cp.output,
"ARITHMETIC", 0);
else
gal_fits_img_write(data, p->cp.output, NULL, PROGRAM_NAME);
diff --git a/bin/arithmetic/main.h b/bin/arithmetic/main.h
index 92c9e9f..af1b9a0 100644
--- a/bin/arithmetic/main.h
+++ b/bin/arithmetic/main.h
@@ -81,6 +81,7 @@ struct arithmeticparams
gal_data_t refdata; /* Container for information of the data. */
char *globalhdu; /* Single HDU for all inputs. */
uint8_t onedasimage; /* Write 1D outputs as an image not table. */
+ uint8_t onedonstdout; /* Write 1D outputs on stdout, not table. */
gal_data_t *named; /* List containing variables. */
size_t tokencounter; /* Counter for finding place in tokens. */
diff --git a/bin/arithmetic/ui.c b/bin/arithmetic/ui.c
index f1e39e5..95f23aa 100644
--- a/bin/arithmetic/ui.c
+++ b/bin/arithmetic/ui.c
@@ -120,6 +120,7 @@ ui_initialize_options(struct arithmeticparams *p,
struct gal_options_common_params *cp=&p->cp;
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
diff --git a/bin/arithmetic/ui.h b/bin/arithmetic/ui.h
index 37bf97d..6bbe04c 100644
--- a/bin/arithmetic/ui.h
+++ b/bin/arithmetic/ui.h
@@ -32,7 +32,7 @@ along with Gnuastro. If not, see
<http://www.gnu.org/licenses/>.
/* Available letters for short options:
- a b c d e f i j k l m n p r s t u v x y z
+ a b c d e f i j k l m n p r t u v x y z
A B C E G H J L Q R X Y
*/
enum option_keys_enum
@@ -40,6 +40,7 @@ enum option_keys_enum
/* With short-option version. */
UI_KEY_GLOBALHDU = 'g',
UI_KEY_ONEDASIMAGE = 'O',
+ UI_KEY_ONEDONSTDOUT = 's',
UI_KEY_WCSFILE = 'w',
UI_KEY_WCSHDU = 'W',
diff --git a/bin/buildprog/Makefile.am b/bin/buildprog/Makefile.am
index fa4f1c5..e8ea17d 100644
--- a/bin/buildprog/Makefile.am
+++ b/bin/buildprog/Makefile.am
@@ -29,13 +29,17 @@ AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib
-I\$(top_srcdir)/lib \
-DLIBDIR=\"$(libdir)\" -DINCLUDEDIR=\"$(includedir)\" \
-DEXEEXT=\"$(EXEEXT)\"
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astbuildprog
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astbuildprog_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astbuildprog_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
# Basic program sources.
astbuildprog_SOURCES = main.c ui.c buildprog.c
diff --git a/bin/buildprog/ui.c b/bin/buildprog/ui.c
index 9d51f20..6db0d66 100644
--- a/bin/buildprog/ui.c
+++ b/bin/buildprog/ui.c
@@ -104,6 +104,7 @@ ui_initialize_options(struct buildprogparams *p,
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
diff --git a/bin/convertt/Makefile.am b/bin/convertt/Makefile.am
index c5f6e76..1935ff1 100644
--- a/bin/convertt/Makefile.am
+++ b/bin/convertt/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astconvertt
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astconvertt_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astconvertt_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astconvertt_SOURCES = main.c ui.c convertt.c color.c
diff --git a/bin/convertt/color.c b/bin/convertt/color.c
index bdd8daa..fc92a62 100644
--- a/bin/convertt/color.c
+++ b/bin/convertt/color.c
@@ -367,39 +367,40 @@ color_from_mono_sls(struct converttparams *p)
case 156: *r=1.000000; *g=0.394800; *b=0.000000; break;
case 157: *r=0.998342; *g=0.361900; *b=0.000000; break;
case 158: *r=0.996683; *g=0.329000; *b=0.000000; break;
- case 160: *r=0.995025; *g=0.296100; *b=0.000000; break;
- case 161: *r=0.993367; *g=0.263200; *b=0.000000; break;
- case 162: *r=0.991708; *g=0.230300; *b=0.000000; break;
- case 163: *r=0.990050; *g=0.197400; *b=0.000000; break;
- case 164: *r=0.988392; *g=0.164500; *b=0.000000; break;
- case 165: *r=0.986733; *g=0.131600; *b=0.000000; break;
- case 166: *r=0.985075; *g=0.098700; *b=0.000000; break;
- case 167: *r=0.983417; *g=0.065800; *b=0.000000; break;
- case 168: *r=0.981758; *g=0.032900; *b=0.000000; break;
- case 169: *r=0.980100; *g=0.000000; *b=0.000000; break;
- case 170: *r=0.955925; *g=0.000000; *b=0.000000; break;
- case 171: *r=0.931750; *g=0.000000; *b=0.000000; break;
- case 172: *r=0.907575; *g=0.000000; *b=0.000000; break;
- case 173: *r=0.883400; *g=0.000000; *b=0.000000; break;
- case 174: *r=0.859225; *g=0.000000; *b=0.000000; break;
- case 175: *r=0.835050; *g=0.000000; *b=0.000000; break;
- case 176: *r=0.810875; *g=0.000000; *b=0.000000; break;
- case 177: *r=0.786700; *g=0.000000; *b=0.000000; break;
- case 178: *r=0.762525; *g=0.000000; *b=0.000000; break;
- case 179: *r=0.738350; *g=0.000000; *b=0.000000; break;
- case 180: *r=0.714175; *g=0.000000; *b=0.000000; break;
- case 181: *r=0.690000; *g=0.000000; *b=0.000000; break;
- case 182: *r=0.715833; *g=0.083333; *b=0.083333; break;
- case 183: *r=0.741667; *g=0.166667; *b=0.166667; break;
- case 184: *r=0.767500; *g=0.250000; *b=0.250000; break;
- case 185: *r=0.793333; *g=0.333333; *b=0.333333; break;
- case 186: *r=0.819167; *g=0.416667; *b=0.416667; break;
- case 187: *r=0.845000; *g=0.500000; *b=0.500000; break;
- case 188: *r=0.870833; *g=0.583333; *b=0.583333; break;
- case 189: *r=0.896667; *g=0.666667; *b=0.666667; break;
- case 190: *r=0.922500; *g=0.750000; *b=0.750000; break;
- case 191: *r=0.948333; *g=0.833333; *b=0.833333; break;
- case 192: *r=0.974167; *g=0.916667; *b=0.916667; break;
+ case 159: *r=0.995025; *g=0.296100; *b=0.000000; break;
+ case 160: *r=0.993367; *g=0.263200; *b=0.000000; break;
+ case 161: *r=0.991708; *g=0.230300; *b=0.000000; break;
+ case 162: *r=0.990050; *g=0.197400; *b=0.000000; break;
+ case 163: *r=0.988392; *g=0.164500; *b=0.000000; break;
+ case 164: *r=0.986733; *g=0.131600; *b=0.000000; break;
+ case 165: *r=0.985075; *g=0.098700; *b=0.000000; break;
+ case 166: *r=0.983417; *g=0.065800; *b=0.000000; break;
+ case 167: *r=0.981758; *g=0.032900; *b=0.000000; break;
+ case 168: *r=0.980100; *g=0.000000; *b=0.000000; break;
+ case 169: *r=0.955925; *g=0.000000; *b=0.000000; break;
+ case 170: *r=0.931750; *g=0.000000; *b=0.000000; break;
+ case 171: *r=0.907575; *g=0.000000; *b=0.000000; break;
+ case 172: *r=0.883400; *g=0.000000; *b=0.000000; break;
+ case 173: *r=0.859225; *g=0.000000; *b=0.000000; break;
+ case 174: *r=0.835050; *g=0.000000; *b=0.000000; break;
+ case 175: *r=0.810875; *g=0.000000; *b=0.000000; break;
+ case 176: *r=0.786700; *g=0.000000; *b=0.000000; break;
+ case 177: *r=0.762525; *g=0.000000; *b=0.000000; break;
+ case 178: *r=0.738350; *g=0.000000; *b=0.000000; break;
+ case 179: *r=0.714175; *g=0.000000; *b=0.000000; break;
+ case 180: *r=0.690000; *g=0.000000; *b=0.000000; break;
+ case 181: *r=0.715833; *g=0.083333; *b=0.083333; break;
+ case 182: *r=0.741667; *g=0.166667; *b=0.166667; break;
+ case 183: *r=0.767500; *g=0.250000; *b=0.250000; break;
+ case 184: *r=0.793333; *g=0.333333; *b=0.333333; break;
+ case 185: *r=0.819167; *g=0.416667; *b=0.416667; break;
+ case 186: *r=0.845000; *g=0.500000; *b=0.500000; break;
+ case 187: *r=0.870833; *g=0.583333; *b=0.583333; break;
+ case 188: *r=0.896667; *g=0.666667; *b=0.666667; break;
+ case 189: *r=0.922500; *g=0.750000; *b=0.750000; break;
+ case 190: *r=0.948333; *g=0.833333; *b=0.833333; break;
+ case 191: *r=0.974167; *g=0.916667; *b=0.916667; break;
+ case 192: *r=1.000000; *g=1.000000; *b=1.000000; break;
case 193: *r=1.000000; *g=1.000000; *b=1.000000; break;
case 194: *r=1.000000; *g=1.000000; *b=1.000000; break;
case 195: *r=1.000000; *g=1.000000; *b=1.000000; break;
diff --git a/bin/convertt/ui.c b/bin/convertt/ui.c
index 56f405f..616314f 100644
--- a/bin/convertt/ui.c
+++ b/bin/convertt/ui.c
@@ -110,6 +110,7 @@ ui_initialize_options(struct converttparams *p,
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
diff --git a/bin/convolve/Makefile.am b/bin/convolve/Makefile.am
index 72aeadd..a5c42a3 100644
--- a/bin/convolve/Makefile.am
+++ b/bin/convolve/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astconvolve
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astconvolve_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astconvolve_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astconvolve_SOURCES = main.c ui.c convolve.c
diff --git a/bin/cosmiccal/Makefile.am b/bin/cosmiccal/Makefile.am
index 96cafdd..984a93a 100644
--- a/bin/cosmiccal/Makefile.am
+++ b/bin/cosmiccal/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astcosmiccal
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astcosmiccal_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astcosmiccal_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astcosmiccal_SOURCES = main.c ui.c cosmiccal.c
diff --git a/bin/cosmiccal/args.h b/bin/cosmiccal/args.h
index d241ca0..1876746 100644
--- a/bin/cosmiccal/args.h
+++ b/bin/cosmiccal/args.h
@@ -301,6 +301,21 @@ struct argp_option program_options[] =
GAL_OPTIONS_NOT_SET,
ui_add_to_single_value,
},
+ {
+ "lineatz",
+ UI_KEY_LINEATZ,
+ "STR/FLT",
+ 0,
+ "Wavelength of given line at chosen redshift",
+ UI_GROUP_SPECIFIC,
+ &p->specific,
+ GAL_TYPE_STRING,
+ GAL_OPTIONS_RANGE_ANY,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET,
+ ui_add_to_single_value,
+ },
+
{0}
diff --git a/bin/cosmiccal/cosmiccal.c b/bin/cosmiccal/cosmiccal.c
index 9d52a29..a7d665f 100644
--- a/bin/cosmiccal/cosmiccal.c
+++ b/bin/cosmiccal/cosmiccal.c
@@ -255,6 +255,10 @@ cosmiccal(struct cosmiccalparams *p)
p->oradiation));
break;
+ case UI_KEY_LINEATZ:
+ printf("%g ", gal_list_f64_pop(&p->specific_arg)*(1+p->redshift));
+ break;
+
default:
error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
"fix the problem. The code %d is not recognized as a "
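The one-line computation in this hunk is the standard redshift relation, lambda_obs = lambda_rest * (1 + z). A minimal sketch (the function name here is ours, not part of Gnuastro's API):

```python
def lineatz(rest_angstrom, z):
    # Observed wavelength of a line emitted at redshift z, as in the
    # printf above: lambda_obs = lambda_rest * (1 + z).
    return rest_angstrom * (1.0 + z)

# H-alpha (rest wavelength ~6562.8 Angstroms) emitted at z=1 is
# observed at twice its rest wavelength.
print("%g" % lineatz(6562.8, 1.0))
```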
diff --git a/bin/cosmiccal/main.h b/bin/cosmiccal/main.h
index 958a8db..0801aa6 100644
--- a/bin/cosmiccal/main.h
+++ b/bin/cosmiccal/main.h
@@ -55,6 +55,7 @@ struct cosmiccalparams
/* Outputs. */
gal_list_i32_t *specific; /* Codes for single row calculations. */
+ gal_list_f64_t *specific_arg; /* Possible arguments for single calcs. */
/* Internal: */
time_t rawtime; /* Starting time of the program. */
diff --git a/bin/cosmiccal/ui.c b/bin/cosmiccal/ui.c
index 8e31895..0340ae7 100644
--- a/bin/cosmiccal/ui.c
+++ b/bin/cosmiccal/ui.c
@@ -106,6 +106,7 @@ ui_initialize_options(struct cosmiccalparams *p,
struct gal_options_common_params *cp=&p->cp;
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
@@ -199,6 +200,10 @@ static void *
ui_add_to_single_value(struct argp_option *option, char *arg,
char *filename, size_t lineno, void *params)
{
+ int linecode;
+ double *dptr, val=NAN;
+ struct cosmiccalparams *p = (struct cosmiccalparams *)params;
+
/* In case of printing the option values. */
if(lineno==-1)
error(EXIT_FAILURE, 0, "currently the options to be printed in one row "
@@ -211,18 +216,43 @@ ui_add_to_single_value(struct argp_option *option, char *arg,
/* If this option is given in a configuration file, then `arg' will not
be NULL and we don't want to do anything if it is `0'. */
- if(arg)
+ switch(option->key)
{
- /* Make sure the value is only `0' or `1'. */
- if( arg[1]!='\0' && *arg!='0' && *arg!='1' )
- error_at_line(EXIT_FAILURE, 0, filename, lineno, "the `--%s' "
- "option takes no arguments. In a configuration "
- "file it can only have the values `1' or `0', "
- "indicating if it should be used or not",
- option->name);
-
- /* Only proceed if the (possibly given) argument is 1. */
- if(arg[0]=='0' && arg[1]=='\0') return NULL;
+ /* Options with arguments. */
+ case UI_KEY_LINEATZ:
+ /* Make sure an argument is given. */
+ if(arg==NULL)
+ error(EXIT_FAILURE, 0, "option `--lineatz' needs an argument");
+
+ /* If the argument is a number, read it; if not, see if it is a known
+ spectral line name. */
+ dptr=&val;
+ if( gal_type_from_string((void **)(&dptr), arg, GAL_TYPE_FLOAT64) )
+ {
+ linecode=gal_speclines_line_code(arg);
+ if(linecode==GAL_SPECLINES_INVALID)
+ error(EXIT_FAILURE, 0, "`%s' not a known spectral line name",
+ arg);
+ val=gal_speclines_line_angstrom(linecode);
+ }
+ gal_list_f64_add(&p->specific_arg, val);
+ break;
+
+ /* Options without arguments. */
+ default:
+ if(arg)
+ {
+ /* Make sure the value is only `0' or `1'. */
+ if( arg[1]!='\0' && *arg!='0' && *arg!='1' )
+ error_at_line(EXIT_FAILURE, 0, filename, lineno, "the `--%s' "
+ "option takes no arguments. In a configuration "
+ "file it can only have the values `1' or `0', "
+ "indicating if it should be used or not",
+ option->name);
+
+ /* Only proceed if the (possibly given) argument is 1. */
+ if(arg[0]=='0' && arg[1]=='\0') return NULL;
+ }
}
/* Add this option to the print list and return. */
@@ -279,10 +309,10 @@ ui_parse_obsline(struct argp_option *option, char *arg,
obsline=gal_options_parse_list_of_numbers(arg, filename, lineno);
/* Only one number must be given as second argument. */
- if(obsline->size!=1)
- error(EXIT_FAILURE, 0, "too many values (%zu) given to `--obsline'. "
- "Only two values (line name/wavelengh, and observed wavelengh) "
- "must be given", obsline->size+1);
+ if(obsline==NULL || obsline->size!=1)
+ error(EXIT_FAILURE, 0, "wrong format given to `--obsline'. Only "
+ "two values (line name/wavelength, and observed wavelength) "
+ "must be given to it");
/* If a wavelength is given directly as a number (not a name), then
put that number in a second element of the array. */
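The stricter check in the hunk above (also rejecting a NULL list, which appears related to the bug #56736 crash noted in NEWS) can be modeled roughly as follows; the function name is a hypothetical stand-in for the C logic:

```python
def obsline_second_value(numbers):
    # After the line name/wavelength is split off, `--obsline' must be
    # left with exactly one number (the observed wavelength). A None
    # list (the C code's NULL) is now caught instead of crashing.
    if numbers is None or len(numbers) != 1:
        raise SystemExit("wrong format given to `--obsline'")
    return numbers[0]
```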
@@ -408,6 +438,7 @@ ui_preparations(struct cosmiccalparams *p)
control reaches here, the list is finalized. So we should just reverse
it so the user gets values in the same order they requested them. */
gal_list_i32_reverse(&p->specific);
+ gal_list_f64_reverse(&p->specific_arg);
}
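The `--lineatz' argument handling added in this file (try to read a float first, then fall back to a named-line lookup, mirroring gal_type_from_string and gal_speclines_line_code) can be sketched like this; the line names and wavelength values below are illustrative stand-ins, not Gnuastro's full table:

```python
# Small illustrative subset of named spectral lines (Angstroms).
LINES_ANGSTROM = {"halpha": 6562.8, "hbeta": 4861.36}

def lineatz_argument(arg):
    # First try to read the argument directly as a number...
    try:
        return float(arg)
    except ValueError:
        pass
    # ...otherwise treat it as a known spectral-line name.
    try:
        return LINES_ANGSTROM[arg.lower()]
    except KeyError:
        raise SystemExit("`%s' not a known spectral line name" % arg)
```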
diff --git a/bin/cosmiccal/ui.h b/bin/cosmiccal/ui.h
index dcc57f9..0b36e52 100644
--- a/bin/cosmiccal/ui.h
+++ b/bin/cosmiccal/ui.h
@@ -42,7 +42,7 @@ enum program_args_groups
/* Available letters for short options:
- f i j k n p t w x y
+ f j k n p t w x y
B E J Q R W X Y
*/
enum option_keys_enum
@@ -68,6 +68,7 @@ enum option_keys_enum
UI_KEY_LOOKBACKTIME = 'b',
UI_KEY_CRITICALDENSITY = 'c',
UI_KEY_VOLUME = 'v',
+ UI_KEY_LINEATZ = 'i',
/* Only with long version (start with a value 1000, the rest will be set
automatically). */
diff --git a/bin/crop/Makefile.am b/bin/crop/Makefile.am
index e6a86a2..b644d67 100644
--- a/bin/crop/Makefile.am
+++ b/bin/crop/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astcrop
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astcrop_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astcrop_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astcrop_SOURCES = main.c ui.c crop.c wcsmode.c onecrop.c
diff --git a/bin/crop/ui.c b/bin/crop/ui.c
index 939826b..e2dd9de 100644
--- a/bin/crop/ui.c
+++ b/bin/crop/ui.c
@@ -112,6 +112,7 @@ ui_initialize_options(struct cropparams *p,
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
cp->program_bibtex = PROGRAM_BIBTEX;
@@ -937,6 +938,7 @@ ui_preparations(struct cropparams *p)
if(p->mode==IMGCROP_MODE_WCS) wcsmode_check_prepare(p, img);
}
+
/* Polygon cropping is currently only supported on 2D */
if(p->imgs->ndim!=2 && p->polygon)
error(EXIT_FAILURE, 0, "%s: polygon cropping is currently only "
diff --git a/bin/fits/Makefile.am b/bin/fits/Makefile.am
index 8d7d0e8..a112c75 100644
--- a/bin/fits/Makefile.am
+++ b/bin/fits/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astfits
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astfits_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astfits_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astfits_SOURCES = main.c ui.c extension.c fits.c keywords.c
diff --git a/bin/fits/ui.c b/bin/fits/ui.c
index 1d69fd4..ca86eee 100644
--- a/bin/fits/ui.c
+++ b/bin/fits/ui.c
@@ -99,6 +99,7 @@ ui_initialize_options(struct fitsparams *p,
/* Set the necessary common parameters structure. */
cp->keep = 1;
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
diff --git a/bin/gnuastro.conf b/bin/gnuastro.conf
index 1ed1845..115b78e 100644
--- a/bin/gnuastro.conf
+++ b/bin/gnuastro.conf
@@ -39,4 +39,4 @@
# Operating mode
quietmmap 0
- minmapsize 2000000000
\ No newline at end of file
+ minmapsize 2000000000
diff --git a/bin/match/Makefile.am b/bin/match/Makefile.am
index c11c58d..9e9fe33 100644
--- a/bin/match/Makefile.am
+++ b/bin/match/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astmatch
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astmatch_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astmatch_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astmatch_SOURCES = main.c ui.c match.c
diff --git a/bin/match/ui.c b/bin/match/ui.c
index d188f2c..dacb0d3 100644
--- a/bin/match/ui.c
+++ b/bin/match/ui.c
@@ -100,6 +100,7 @@ ui_initialize_options(struct matchparams *p,
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
diff --git a/bin/mkcatalog/Makefile.am b/bin/mkcatalog/Makefile.am
index b834dff..19b30f3 100644
--- a/bin/mkcatalog/Makefile.am
+++ b/bin/mkcatalog/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astmkcatalog
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astmkcatalog_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astmkcatalog_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astmkcatalog_SOURCES = main.c ui.c mkcatalog.c columns.c upperlimit.c parse.c
diff --git a/bin/mknoise/Makefile.am b/bin/mknoise/Makefile.am
index 6f09968..555f7fd 100644
--- a/bin/mknoise/Makefile.am
+++ b/bin/mknoise/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astmknoise
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astmknoise_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astmknoise_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astmknoise_SOURCES = main.c ui.c mknoise.c
diff --git a/bin/mknoise/ui.c b/bin/mknoise/ui.c
index c30cd7b..b1f775d 100644
--- a/bin/mknoise/ui.c
+++ b/bin/mknoise/ui.c
@@ -104,6 +104,7 @@ ui_initialize_options(struct mknoiseparams *p,
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
cp->program_bibtex = PROGRAM_BIBTEX;
@@ -437,6 +438,7 @@ ui_free_report(struct mknoiseparams *p, struct timeval *t1)
/* Free the allocated arrays: */
free(p->cp.hdu);
free(p->cp.output);
+ gsl_rng_free(p->rng);
gal_data_free(p->input);
/* Print the final message. */
diff --git a/bin/mkprof/Makefile.am b/bin/mkprof/Makefile.am
index cfaa9ea..884a270 100644
--- a/bin/mkprof/Makefile.am
+++ b/bin/mkprof/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astmkprof
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astmkprof_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astmkprof_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astmkprof_SOURCES = main.c ui.c mkprof.c oneprofile.c profiles.c
diff --git a/bin/mkprof/ui.c b/bin/mkprof/ui.c
index f7b4fc0..ce000ab 100644
--- a/bin/mkprof/ui.c
+++ b/bin/mkprof/ui.c
@@ -178,6 +178,7 @@ ui_initialize_options(struct mkprofparams *p,
struct gal_options_common_params *cp=&p->cp;
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
cp->program_bibtex = PROGRAM_BIBTEX;
diff --git a/bin/noisechisel/Makefile.am b/bin/noisechisel/Makefile.am
index 873a112..86633b1 100644
--- a/bin/noisechisel/Makefile.am
+++ b/bin/noisechisel/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astnoisechisel
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astnoisechisel_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astnoisechisel_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la \
+ -lgnuastro $(MAYBE_NORPATH)
astnoisechisel_SOURCES = main.c ui.c detection.c noisechisel.c sky.c \
threshold.c
diff --git a/bin/noisechisel/detection.c b/bin/noisechisel/detection.c
index e8895ce..6baa8c4 100644
--- a/bin/noisechisel/detection.c
+++ b/bin/noisechisel/detection.c
@@ -991,9 +991,11 @@ detection_remove_false_initial(struct noisechiselparams *p,
e_th=p->exp_thresh_full->array;
do /* Growth is necessary later. */
{ /* So there is no need to set */
- if(*l!=GAL_BLANK_INT32) /* the labels image, but we */
- { /* have to count the number of */
- *b = newlabels[ *l ] > 0; /* pixels to (possibly) grow. */
+ if(*l==GAL_BLANK_INT32) /* the labels image, but we */
+ *b=GAL_BLANK_UINT8; /* have to count the number of */
+ else /* pixels to (possibly) grow. */
+ {
+ *b = newlabels[ *l ] > 0;
if( *b==0 && *arr>*e_th )
++p->numexpand;
}
@@ -1001,11 +1003,11 @@ detection_remove_false_initial(struct noisechiselparams
*p,
}
while(++l<lf);
+
/* If there aren't any pixels to later expand, then reset the labels
(remove false detections in the labeled image). */
if(p->numexpand==0)
{
- b=workbin->array;
l=p->olabel->array;
do if(*l!=GAL_BLANK_INT32) *l = newlabels[ *l ]; while(++l<lf);
}
diff --git a/bin/noisechisel/ui.c b/bin/noisechisel/ui.c
index 017daf3..e930a0e 100644
--- a/bin/noisechisel/ui.c
+++ b/bin/noisechisel/ui.c
@@ -107,6 +107,7 @@ ui_initialize_options(struct noisechiselparams *p,
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
diff --git a/bin/segment/Makefile.am b/bin/segment/Makefile.am
index ea56135..56bb770 100644
--- a/bin/segment/Makefile.am
+++ b/bin/segment/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astsegment
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astsegment_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astsegment_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astsegment_SOURCES = main.c ui.c segment.c clumps.c
diff --git a/bin/segment/ui.c b/bin/segment/ui.c
index 55d591d..77b1a44 100644
--- a/bin/segment/ui.c
+++ b/bin/segment/ui.c
@@ -107,6 +107,7 @@ ui_initialize_options(struct segmentparams *p,
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
@@ -405,7 +406,7 @@ static void
ui_prepare_inputs(struct segmentparams *p)
{
int32_t *i, *ii;
- gal_data_t *maxd, *ccin, *ccout=NULL;
+ gal_data_t *maxd, *ccin, *blankflag, *ccout=NULL;
/* Read the input as a single precision floating point dataset. */
p->input = gal_array_read_one_ch_to_type(p->inputname, p->cp.hdu,
@@ -483,6 +484,16 @@ ui_prepare_inputs(struct segmentparams *p)
p->dhdu, gal_type_name(p->olabel->type, 1),
p->useddetectionname, p->dhdu);
+ /* If the input has blank values, set them to blank values in the
+ labeled image too. It doesn't matter if the labeled image has
+ blank pixels that aren't blank on the input image. */
+ if(gal_blank_present(p->input, 1))
+ {
+ blankflag=gal_blank_flag(p->input);
+ gal_blank_flag_apply(p->olabel, blankflag);
+ gal_data_free(blankflag);
+ }
+
/* Get the maximum value of the input (total number of labels if they
are separate). If the maximum is 1 (the image is a binary image),
then apply the connected components algorithm to separate the
diff --git a/bin/statistics/Makefile.am b/bin/statistics/Makefile.am
index 972cc54..4106976 100644
--- a/bin/statistics/Makefile.am
+++ b/bin/statistics/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = aststatistics
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-aststatistics_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+aststatistics_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la \
+ -lgnuastro $(MAYBE_NORPATH)
aststatistics_SOURCES = main.c ui.c sky.c statistics.c
diff --git a/bin/table/Makefile.am b/bin/table/Makefile.am
index 244830a..8b4dcdb 100644
--- a/bin/table/Makefile.am
+++ b/bin/table/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L\$(top_builddir)/lib
AM_CPPFLAGS = -I\$(top_srcdir)/bootstrapped/lib -I\$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = asttable
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-asttable_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+asttable_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
asttable_SOURCES = main.c ui.c arithmetic.c table.c
diff --git a/bin/table/args.h b/bin/table/args.h
index 144616c..2f2506e 100644
--- a/bin/table/args.h
+++ b/bin/table/args.h
@@ -102,13 +102,24 @@ struct argp_option program_options[] =
GAL_OPTIONS_NOT_MANDATORY,
GAL_OPTIONS_NOT_SET
},
+
+
+
+
+
+ /* Output Rows */
+ {
+ 0, 0, 0, 0,
+ "Rows in output:",
+ UI_GROUP_OUTROWS
+ },
{
"range",
UI_KEY_RANGE,
"STR,FLT:FLT",
0,
"Column, and range to limit output.",
- GAL_OPTIONS_GROUP_OUTPUT,
+ UI_GROUP_OUTROWS,
&p->range,
GAL_TYPE_STRING,
GAL_OPTIONS_RANGE_ANY,
@@ -117,12 +128,40 @@ struct argp_option program_options[] =
gal_options_parse_name_and_values
},
{
+ "equal",
+ UI_KEY_EQUAL,
+ "STR,FLT,FLT",
+ 0,
+ "Column, values to keep in output.",
+ UI_GROUP_OUTROWS,
+ &p->equal,
+ GAL_TYPE_STRING,
+ GAL_OPTIONS_RANGE_ANY,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET,
+ gal_options_parse_name_and_values
+ },
+ {
+ "notequal",
+ UI_KEY_NOTEQUAL,
+ "STR,FLT,FLT",
+ 0,
+ "Column, values to remove from output.",
+ UI_GROUP_OUTROWS,
+ &p->notequal,
+ GAL_TYPE_STRING,
+ GAL_OPTIONS_RANGE_ANY,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET,
+ gal_options_parse_name_and_values
+ },
+ {
"sort",
UI_KEY_SORT,
"STR,INT",
0,
"Column name or number for sorting.",
- GAL_OPTIONS_GROUP_OUTPUT,
+ UI_GROUP_OUTROWS,
&p->sort,
GAL_TYPE_STRING,
GAL_OPTIONS_RANGE_ANY,
@@ -135,7 +174,7 @@ struct argp_option program_options[] =
0,
0,
"Sort in descending order: largest first.",

- GAL_OPTIONS_GROUP_OUTPUT,
+ UI_GROUP_OUTROWS,
&p->descending,
GAL_OPTIONS_NO_ARG_TYPE,
GAL_OPTIONS_RANGE_0_OR_1,
@@ -148,7 +187,7 @@ struct argp_option program_options[] =
"INT",
0,
"Only output given number of top rows.",
- GAL_OPTIONS_GROUP_OUTPUT,
+ UI_GROUP_OUTROWS,
&p->head,
GAL_TYPE_SIZE_T,
GAL_OPTIONS_RANGE_GE_0,
@@ -161,7 +200,7 @@ struct argp_option program_options[] =
"INT",
0,
"Only output given number of bottom rows.",
- GAL_OPTIONS_GROUP_OUTPUT,
+ UI_GROUP_OUTROWS,
&p->tail,
GAL_TYPE_SIZE_T,
GAL_OPTIONS_RANGE_GE_0,
@@ -172,7 +211,6 @@ struct argp_option program_options[] =
-
/* End. */
{0}
};
diff --git a/bin/table/main.h b/bin/table/main.h
index 44694f6..0a55512 100644
--- a/bin/table/main.h
+++ b/bin/table/main.h
@@ -33,14 +33,26 @@ along with Gnuastro. If not, see
<http://www.gnu.org/licenses/>.
#define PROGRAM_EXEC "asttable" /* Program executable name. */
#define PROGRAM_STRING PROGRAM_NAME" (" PACKAGE_NAME ") " PACKAGE_VERSION
+/* Row selection types. */
+enum select_types
+{
+ /* Different types of row-selection */
+ SELECT_TYPE_RANGE, /* 0 by C standard */
+ SELECT_TYPE_EQUAL,
+ SELECT_TYPE_NOTEQUAL,
+
+ /* This marks the total number of row-selection criteria. */
+ SELECT_TYPE_NUMBER,
+};
/* Basic structure. */
-struct list_range
+struct list_select
{
- gal_data_t *v;
- struct list_range *next;
+ gal_data_t *col;
+ int type;
+ struct list_select *next;
};
struct arithmetic_token
@@ -77,6 +89,8 @@ struct tableparams
uint8_t information; /* ==1: only print FITS information. */
uint8_t colinfoinstdout; /* ==1: print column metadata in CL. */
gal_data_t *range; /* Range to limit output. */
+ gal_data_t *equal; /* Values to keep in output. */
+ gal_data_t *notequal; /* Values to not include in output. */
char *sort; /* Column name or number for sorting. */
uint8_t descending; /* Sort columns in descending order. */
size_t head; /* Output only the no. of top rows. */
@@ -89,9 +103,11 @@ struct tableparams
int nwcs; /* Number of WCS structures. */
gal_data_t *allcolinfo; /* Information of all the columns. */
gal_data_t *sortcol; /* Column to define a sorting. */
- struct list_range *rangecol; /* Column to define a range. */
+ int selection; /* Any row-selection is requested. */
+ gal_data_t *select; /* Select rows for output. */
+ struct list_select *selectcol; /* Column to define a range. */
uint8_t freesort; /* If the sort column should be freed. */
- uint8_t *freerange; /* If the range column should be freed. */
+ uint8_t *freeselect; /* If selection columns should be freed.*/
uint8_t sortin; /* If the sort column is in the output. */
time_t rawtime; /* Starting time of the program. */
gal_data_t **colarray; /* Array of columns, with arithmetic. */
diff --git a/bin/table/table.c b/bin/table/table.c
index 6051c38..7db80f7 100644
--- a/bin/table/table.c
+++ b/bin/table/table.c
@@ -75,81 +75,207 @@ table_apply_permutation(gal_data_t *table, size_t
*permutation,
-static void
-table_range(struct tableparams *p)
+static gal_data_t *
+table_selection_range(struct tableparams *p, gal_data_t *col)
{
- uint8_t *u;
- double *rarr;
- gal_data_t *mask;
- struct list_range *tmp;
- gal_data_t *ref, *perm, *range, *blmask;
- size_t i, g, b, *s, *sf, one=1, ngood=0;
- gal_data_t *min, *max, *ltmin, *gemax, *sum;
-
+ size_t one=1;
+ double *darr;
int numok=GAL_ARITHMETIC_NUMOK;
int inplace=GAL_ARITHMETIC_INPLACE;
+ gal_data_t *min=NULL, *max=NULL, *tmp, *ltmin, *gemax=NULL;
- /* Allocate datasets for the necessary numbers and write them in. */
+ /* First, make sure everything is OK. */
+ if(p->range==NULL)
+ error(EXIT_FAILURE, 0, "%s: a bug! Please contact us to fix the "
+ "problem at %s. `p->range' should not be NULL at this point",
+ __func__, PACKAGE_BUGREPORT);
+
+ /* Allocations. */
min=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0, -1, 1,
NULL, NULL, NULL);
max=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0, -1, 1,
NULL, NULL, NULL);
+
+ /* Read the range of values for this column. */
+ darr=p->range->array;
+ ((double *)(min->array))[0] = darr[0];
+ ((double *)(max->array))[0] = darr[1];
+
+ /* Move `p->range' to the next element in the list and free the current
+ one (we have already read its values and don't need it any more). */
+ tmp=p->range;
+ p->range=p->range->next;
+ gal_data_free(tmp);
+
+  /* Find all the elements outside this range (smaller than the minimum,
+     or greater/equal to the maximum) as separate binary flags. */
+ ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_LT, 1, numok, col, min);
+ gemax=gal_arithmetic(GAL_ARITHMETIC_OP_GE, 1, numok, col, max);
+
+ /* Merge them both into one array. */
+ ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, ltmin, gemax);
+
+ /* For a check.
+ {
+ size_t i;
+ uint8_t *u=ltmin->array;
+ for(i=0;i<ltmin->size;++i) printf("%zu: %u\n", i, u[i]);
+ exit(0);
+ }
+ */
+
+ /* Clean up and return. */
+ gal_data_free(gemax);
+ gal_data_free(min);
+ gal_data_free(max);
+ return ltmin;
+}
+
+
+
+
+
+static gal_data_t *
+table_selection_equal_or_notequal(struct tableparams *p, gal_data_t *col,
+ int e0n1)
+{
+ double *darr;
+ size_t i, one=1;
+ int numok=GAL_ARITHMETIC_NUMOK;
+ int inplace=GAL_ARITHMETIC_INPLACE;
+ gal_data_t *eq, *out=NULL, *value=NULL;
+ gal_data_t *arg = e0n1 ? p->notequal : p->equal;
+
+ /* Note that this operator is used to make the "masked" array, so when
+ `e0n1==0' the operator should be `GAL_ARITHMETIC_OP_NE' and
+ vice-versa.
+
+ For the merging with other elements, when `e0n1==0', we need the
+ `GAL_ARITHMETIC_OP_AND', but for `e0n1==1', it should be `OR'. */
+ int mergeop = e0n1 ? GAL_ARITHMETIC_OP_OR : GAL_ARITHMETIC_OP_AND;
+ int operator = e0n1 ? GAL_ARITHMETIC_OP_EQ : GAL_ARITHMETIC_OP_NE;
+
+ /* First, make sure everything is OK. */
+ if(arg==NULL)
+ error(EXIT_FAILURE, 0, "%s: a bug! Please contact us to fix the "
+          "problem at %s. `p->equal' or `p->notequal' should not be "
+          "NULL at this point", __func__, PACKAGE_BUGREPORT);
+
+ /* Allocate space for the value. */
+ value=gal_data_alloc(NULL, GAL_TYPE_FLOAT64, 1, &one, NULL, 0, -1, 1,
+ NULL, NULL, NULL);
+
+ /* Go through the values given to this call of the option and flag the
+ elements. */
+ for(i=0;i<arg->size;++i)
+ {
+ darr=arg->array;
+ ((double *)(value->array))[0] = darr[i];
+ eq=gal_arithmetic(operator, 1, numok, col, value);
+ if(out)
+ {
+ out=gal_arithmetic(mergeop, 1, inplace, out, eq);
+ gal_data_free(eq);
+ }
+ else
+ out=eq;
+ }
+
+ /* For a check.
+ {
+ uint8_t *u=out->array;
+ for(i=0;i<out->size;++i) printf("%zu: %u\n", i, u[i]);
+ exit(0);
+ }
+ */
+
+ /* Move the main pointer to the next possible call of the given
+ option. With this, we can safely free `arg' at this point. */
+ if(e0n1) p->notequal=p->notequal->next;
+ else p->equal=p->equal->next;
+
+ /* Clean up and return. */
+ gal_data_free(value);
+ gal_data_free(arg);
+ return out;
+}
+
+
+
+
+
+static void
+table_selection(struct tableparams *p)
+{
+ uint8_t *u;
+ struct list_select *tmp;
+ gal_data_t *mask, *addmask=NULL;
+ gal_data_t *sum, *perm, *blmask;
+ size_t i, g, b, *s, *sf, ngood=0;
+ int inplace=GAL_ARITHMETIC_INPLACE;
+
+ /* Allocate datasets for the necessary numbers and write them in. */
perm=gal_data_alloc(NULL, GAL_TYPE_SIZE_T, 1, p->table->dsize, NULL, 0,
p->cp.minmapsize, p->cp.quietmmap, NULL, NULL, NULL);
mask=gal_data_alloc(NULL, GAL_TYPE_UINT8, 1, p->table->dsize, NULL, 1,
p->cp.minmapsize, p->cp.quietmmap, NULL, NULL, NULL);
- /* Go over all the necessary range options. */
- range=p->range;
- for(tmp=p->rangecol;tmp!=NULL;tmp=tmp->next)
+ /* Go over each selection criteria and remove the necessary elements. */
+ for(tmp=p->selectcol;tmp!=NULL;tmp=tmp->next)
{
- /* Set the minimum and maximum values. */
- rarr=range->array;
- ((double *)(min->array))[0] = rarr[0];
- ((double *)(max->array))[0] = rarr[1];
-
- /* Set the reference column to read values from. */
- ref=tmp->v;
-
- /* Find all the bad elements (smaller than the minimum, larger than
- the maximum or blank) so we can flag them. */
- ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_LT, 1, numok, ref, min);
- gemax=gal_arithmetic(GAL_ARITHMETIC_OP_GE, 1, numok, ref, max);
- blmask = ( gal_blank_present(ref, 1)
- ? gal_arithmetic(GAL_ARITHMETIC_OP_ISBLANK, 1, 0, ref)
- : NULL );
-
- /* Merge all the flags into one array. */
- ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, ltmin, gemax);
- if(blmask)
- ltmin=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, ltmin, blmask);
-
- /* Add these flags to all previous flags. */
- mask=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, mask, ltmin);
+ switch(tmp->type)
+ {
+ case SELECT_TYPE_RANGE:
+ addmask=table_selection_range(p, tmp->col);
+ break;
+
+ case SELECT_TYPE_EQUAL:
+ addmask=table_selection_equal_or_notequal(p, tmp->col, 0);
+ break;
+
+ case SELECT_TYPE_NOTEQUAL:
+ addmask=table_selection_equal_or_notequal(p, tmp->col, 1);
+ break;
+
+ default:
+ error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s "
+ "to fix the problem. The code %d is not a recognized "
+                "selection identifier", __func__, PACKAGE_BUGREPORT,
+ tmp->type);
+ }
- /* For a check.
- {
- float *f=ref->array;
- uint8_t *m=mask->array;
- uint8_t *u=ltmin->array, *uf=u+ltmin->size;
- printf("\n\nInput column: %s\n", ref->name ? ref->name : "No Name");
- printf("Range: %g, %g\n", rarr[0], rarr[1]);
- printf("%-20s%-20s%-20s\n", "Value", "This mask",
- "Including previous");
- do printf("%-20f%-20u%-20u\n", *f++, *u++, *m++); while(u<uf);
- exit(0);
- }
- */
+ /* Remove any blank elements. */
+ if(gal_blank_present(tmp->col, 1))
+ {
+ blmask = gal_arithmetic(GAL_ARITHMETIC_OP_ISBLANK, 1, 0, tmp->col);
+ addmask=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace,
+ addmask, blmask);
+ gal_data_free(blmask);
+ }
- /* Clean up. */
- gal_data_free(ltmin);
- gal_data_free(gemax);
+ /* Add this mask array to the cumulative mask array (of all
+ selections). */
+ mask=gal_arithmetic(GAL_ARITHMETIC_OP_OR, 1, inplace, mask, addmask);
- /* Increment pointers. */
- range=range->next;
+      /* For a check.
+      {
+        float *f=tmp->col->array;
+        uint8_t *m=mask->array;
+        uint8_t *u=addmask->array, *uf=u+addmask->size;
+        printf("\n\nInput column: %s\n",
+               tmp->col->name ? tmp->col->name : "No Name");
+        printf("%-20s%-20s%-20s\n", "Value", "This mask",
+               "Including previous");
+        do printf("%-20f%-20u%-20u\n", *f++, *u++, *m++); while(u<uf);
+        exit(0);
+      }
+      */
+
+ /* Final clean up. */
+ gal_data_free(addmask);
}
- /* Count the number of bad elements. */
+ /* Find the final number of elements to print. */
sum=gal_statistics_sum(mask);
ngood = p->table->size - ((double *)(sum->array))[0];
@@ -185,15 +311,13 @@ table_range(struct tableparams *p)
/* Clean up. */
i=0;
- for(tmp=p->rangecol;tmp!=NULL;tmp=tmp->next)
- { if(p->freerange[i]) {gal_data_free(tmp->v); tmp->v=NULL;} ++i; }
- ui_list_range_free(p->rangecol, 0);
+ for(tmp=p->selectcol;tmp!=NULL;tmp=tmp->next)
+ { if(p->freeselect[i]) {gal_data_free(tmp->col); tmp->col=NULL;} ++i; }
+ ui_list_select_free(p->selectcol, 0);
gal_data_free(mask);
gal_data_free(perm);
+ free(p->freeselect);
gal_data_free(sum);
- gal_data_free(min);
- gal_data_free(max);
- free(p->freerange);
}
@@ -215,6 +339,22 @@ table_sort(struct tableparams *p)
p->cp.minmapsize, p->cp.quietmmap, NULL, NULL, NULL);
sf=(s=perm->array)+perm->size; do *s=c++; while(++s<sf);
+ /* For string columns, print a descriptive message. Note that some FITS
+ tables were found that do actually have numbers stored in string
+ types! */
+ if(p->sortcol->type==GAL_TYPE_STRING)
+ error(EXIT_FAILURE, 0, "sort column has a string type, but it can "
+ "(currently) only work on numbers.\n\n"
+          "TIP: if you know the column's contents are all numbers that are "
+ "just stored as strings, you can use this program to save the "
+ "table as a text file, modify the column meta-data (for example "
+ "to type `i32' or `f32' instead of `strN'), then use this "
+ "program again to save it as a FITS table.\n\n"
+ "For more on column metadata in plain text format, please run "
+          "the following command (or see the `Gnuastro text table format' "
+          "section of the book/manual):\n\n"
+ " $ info gnuastro \"gnuastro text table format\"");
+
/* Set the proper qsort function. */
if(p->descending)
switch(p->sortcol->type)
@@ -333,7 +473,7 @@ void
table(struct tableparams *p)
{
/* Apply a certain range (if required) to the output sample. */
- if(p->range) table_range(p);
+ if(p->selection) table_selection(p);
/* Sort it (if required). */
if(p->sort) table_sort(p);
diff --git a/bin/table/ui.c b/bin/table/ui.c
index 86cc27d..d12cc4b 100644
--- a/bin/table/ui.c
+++ b/bin/table/ui.c
@@ -110,6 +110,7 @@ ui_initialize_options(struct tableparams *p,
/* Set the necessary common parameters structure. */
+ cp->program_struct = p;
cp->poptions = program_options;
cp->program_name = PROGRAM_NAME;
cp->program_exec = PROGRAM_EXEC;
@@ -237,8 +238,9 @@ ui_read_check_only_options(struct tableparams *p)
{
/* Range needs two input numbers. */
if(tmp->size!=2)
- error(EXIT_FAILURE, 0, "two values (separated by comma) necessary "
- "for `--range' in this format: `--range=COLUMN,min,max'");
+ error(EXIT_FAILURE, 0, "two values (separated by `:' or `,') are "
+ "necessary for `--range' in this format: "
+ "`--range=COLUMN,min:max'");
/* The first must be smaller than the second. */
darr=tmp->array;
@@ -247,6 +249,7 @@ ui_read_check_only_options(struct tableparams *p)
"be smaller than the second (%g)", darr[0], darr[1]);
}
+
/* Make sure `--head' and `--tail' aren't given together. */
if(p->head!=GAL_BLANK_SIZE_T && p->tail!=GAL_BLANK_SIZE_T)
error(EXIT_FAILURE, 0, "`--head' and `--tail' options cannot be "
@@ -291,19 +294,20 @@ ui_check_options_and_arguments(struct tableparams *p)
/**************************************************************/
-/*************** List of range datasets *******************/
+/************ List of row-selection requests **************/
/**************************************************************/
static void
-ui_list_range_add(struct list_range **list, gal_data_t *dataset)
+ui_list_select_add(struct list_select **list, gal_data_t *col, int type)
{
- struct list_range *newnode;
+ struct list_select *newnode;
errno=0;
newnode=malloc(sizeof *newnode);
if(newnode==NULL)
error(EXIT_FAILURE, errno, "%s: allocating new node", __func__);
- newnode->v=dataset;
+ newnode->col=col;
+ newnode->type=type;
newnode->next=*list;
*list=newnode;
}
@@ -313,15 +317,19 @@ ui_list_range_add(struct list_range **list, gal_data_t
*dataset)
static gal_data_t *
-ui_list_range_pop(struct list_range **list)
+ui_list_select_pop(struct list_select **list, int *type)
{
gal_data_t *out=NULL;
- struct list_range *tmp;
+ struct list_select *tmp;
if(*list)
{
+ /* Extract all the necessary components of the node. */
tmp=*list;
- out=tmp->v;
+ out=tmp->col;
+ *type=tmp->type;
*list=tmp->next;
+
+ /* Delete the node. */
free(tmp);
}
return out;
@@ -332,18 +340,19 @@ ui_list_range_pop(struct list_range **list)
static void
-ui_list_range_reverse(struct list_range **list)
+ui_list_select_reverse(struct list_select **list)
{
+ int thistype;
gal_data_t *thisdata;
- struct list_range *correctorder=NULL;
+ struct list_select *correctorder=NULL;
/* Only do the reversal if there is more than one element. */
if( *list && (*list)->next )
{
while(*list!=NULL)
{
- thisdata=ui_list_range_pop(list);
- ui_list_range_add(&correctorder, thisdata);
+ thisdata=ui_list_select_pop(list, &thistype);
+ ui_list_select_add(&correctorder, thisdata, thistype);
}
*list=correctorder;
}
@@ -354,14 +363,14 @@ ui_list_range_reverse(struct list_range **list)
void
-ui_list_range_free(struct list_range *list, int freevalue)
+ui_list_select_free(struct list_select *list, int freevalue)
{
- struct list_range *tmp;
+ struct list_select *tmp;
while(list!=NULL)
{
tmp=list->next;
if(freevalue)
- gal_data_free(list->v);
+ gal_data_free(list->col);
free(list);
list=tmp;
}
@@ -644,7 +653,7 @@ ui_columns_prepare(struct tableparams *p)
(starting from 0). So if we can read it as a number, we'll subtract one
from it. */
static size_t
-ui_check_range_sort_read_col_ind(char *string)
+ui_check_select_sort_read_col_ind(char *string)
{
size_t out;
void *ptr=&out;
@@ -660,44 +669,58 @@ ui_check_range_sort_read_col_ind(char *string)
-/* See if the `--range' and `--sort' columns should also be added. */
+/* See if row selection or sorting needs any extra columns to be read. */
static void
-ui_check_range_sort_before(struct tableparams *p, gal_list_str_t *lines,
- size_t *nrange, size_t *origoutncols,
- size_t *sortindout, size_t **rangeindout_out)
-{
- size_t *rangeind=NULL;
- size_t *rangeindout=NULL;
+ui_check_select_sort_before(struct tableparams *p, gal_list_str_t *lines,
+ size_t *nselect, size_t *origoutncols,
+ size_t *sortindout, size_t **selectindout_out,
+ size_t **selecttypeout_out)
+{
gal_data_t *dtmp, *allcols;
size_t sortind=GAL_BLANK_SIZE_T;
- int tableformat, rangehasname=0;
gal_list_sizet_t *tmp, *indexll;
gal_list_str_t *stmp, *add=NULL;
- size_t i, j, *s, *sf, allncols, numcols, numrows;
+ int tableformat, selecthasname=0;
+ size_t *selectind=NULL, *selecttype=NULL;
+ size_t *selectindout=NULL, *selecttypeout=NULL;
+ size_t i, j, k, *s, *sf, allncols, numcols, numrows;
+
+ /* Important note: these have to be in the same order as the `enum
+ select_types' in `main.h'. */
+ gal_data_t *select[SELECT_TYPE_NUMBER]={p->range, p->equal, p->notequal};
/* Allocate necessary spaces. */
- if(p->range)
+ if(p->selection)
{
- *nrange=gal_list_data_number(p->range);
- rangeind=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nrange, 0,
- __func__, "rangeind");
- rangeindout=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nrange, 0,
- __func__, "rangeindout");
- sf=(s=rangeindout)+*nrange; do *s++=GAL_BLANK_SIZE_T; while(s<sf);
- *rangeindout_out=rangeindout;
+ *nselect = ( gal_list_data_number(p->range)
+ + gal_list_data_number(p->equal)
+ + gal_list_data_number(p->notequal) );
+ selectind=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nselect, 0,
+ __func__, "selectind");
+ selecttype=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nselect, 0,
+ __func__, "selecttype");
+ selectindout=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nselect, 0,
+ __func__, "selectindout");
+ selecttypeout=gal_pointer_allocate(GAL_TYPE_SIZE_T, *nselect, 0,
+ __func__, "selecttypeout");
+ sf=(s=selectindout)+*nselect; do *s++=GAL_BLANK_SIZE_T; while(s<sf);
+ *selectindout_out=selectindout;
+ *selecttypeout_out=selecttypeout;
}
/* See if the given columns are numbers or names. */
i=0;
- if(p->sort) sortind = ui_check_range_sort_read_col_ind(p->sort);
- if(p->range)
- for(dtmp=p->range;dtmp!=NULL;dtmp=dtmp->next)
- {
- rangeind[i] = ui_check_range_sort_read_col_ind(dtmp->name);
- ++i;
- }
+ if(p->sort) sortind = ui_check_select_sort_read_col_ind(p->sort);
+ if(p->selection)
+ for(k=0;k<SELECT_TYPE_NUMBER;++k)
+ for(dtmp=select[k];dtmp!=NULL;dtmp=dtmp->next)
+ {
+ selecttype[i] = k;
+ selectind[i] = ui_check_select_sort_read_col_ind(dtmp->name);
+ ++i;
+ }
/* Get all the column information. */
@@ -713,21 +736,21 @@ ui_check_range_sort_before(struct tableparams *p,
gal_list_str_t *lines,
"number given to `--sort' (%s)",
gal_fits_name_save_as_string(p->filename, p->cp.hdu), numcols,
p->sort);
- if(p->range)
- for(i=0;i<*nrange;++i)
- if(rangeind[i]!=GAL_BLANK_SIZE_T && rangeind[i]>=numcols)
+ if(p->selection)
+ for(i=0;i<*nselect;++i)
+ if(selectind[i]!=GAL_BLANK_SIZE_T && selectind[i]>=numcols)
error(EXIT_FAILURE, 0, "%s has %zu columns, less than the column "
- "number given to `--range' (%zu)",
+              "number given to `--range', `--equal' or `--notequal' (%zu)",
gal_fits_name_save_as_string(p->filename, p->cp.hdu), numcols,
- rangeind[i]);
+ selectind[i]);
/* If any of the columns isn't specified by an index, go over the table
information and set the index based on the names. */
- if(p->range)
- for(i=0;i<*nrange;++i)
- if(rangeind[i]==GAL_BLANK_SIZE_T) { rangehasname=1; break; }
- if( (p->sort && sortind==GAL_BLANK_SIZE_T) || rangehasname )
+ if(p->selection)
+ for(i=0;i<*nselect;++i)
+ if(selectind[i]==GAL_BLANK_SIZE_T) { selecthasname=1; break; }
+ if( (p->sort && sortind==GAL_BLANK_SIZE_T) || selecthasname )
{
/* For `--sort', go over all the columns if an index hasn't been set
yet. If the input columns have a name, see if their names matches
@@ -737,46 +760,48 @@ ui_check_range_sort_before(struct tableparams *p,
gal_list_str_t *lines,
if( allcols[i].name && !strcasecmp(allcols[i].name, p->sort) )
{ sortind=i; break; }
- /* Same for `--range'. Just note that here we may have multiple calls
- to `--range'. It is thus important to loop over the values given
- to range first, then loop over the column names from the start for
- each new `--ran */
+      /* Same for the selection. Just note that here we may have multiple
+         calls. It is thus important to loop over the values given to each
+         option first, then loop over the column names from the start for
+         each new call. */
i=0;
- if(p->range)
- for(dtmp=p->range;dtmp!=NULL;dtmp=dtmp->next)
+ for(k=0;k<SELECT_TYPE_NUMBER;++k)
+ for(dtmp=select[k];dtmp!=NULL;dtmp=dtmp->next)
{
- if(rangeind[i]==GAL_BLANK_SIZE_T)
- for(j=0;j<numcols;++j)
- if( allcols[j].name
- && !strcasecmp(allcols[j].name, dtmp->name) )
- { rangeind[i]=j; break; }
- ++i;
+ if(selectind[i]==GAL_BLANK_SIZE_T)
+ for(j=0;j<numcols;++j)
+ if( allcols[j].name
+ && !strcasecmp(allcols[j].name, dtmp->name) )
+ { selecttype[i]=k; selectind[i]=j; break; }
+ ++i;
}
}
- /* Both columns must be good indexs now, if they aren't the user didn't
+  /* The columns must be valid indices now; if they are not, the user didn't
specify the name properly and the program must abort. */
if( p->sort && sortind==GAL_BLANK_SIZE_T )
error(EXIT_FAILURE, 0, "%s: no column named `%s' (value to `--sort') "
"you can either specify a name or number",
gal_fits_name_save_as_string(p->filename, p->cp.hdu), p->sort);
- if(p->range)
+ if(p->selection)
{
i=0;
- for(dtmp=p->range;dtmp!=NULL;dtmp=dtmp->next)
- {
- if(rangeind[i]==GAL_BLANK_SIZE_T)
- error(EXIT_FAILURE, 0, "%s: no column named `%s' (value to "
- "`--range') you can either specify a name or number",
- gal_fits_name_save_as_string(p->filename, p->cp.hdu),
- dtmp->name);
- ++i;
- }
+ for(k=0;k<SELECT_TYPE_NUMBER;++k)
+ for(dtmp=select[k];dtmp!=NULL;dtmp=dtmp->next)
+ {
+ if(selectind[i]==GAL_BLANK_SIZE_T)
+ error(EXIT_FAILURE, 0, "%s: no column named `%s' (value to "
+ "`--%s') you can either specify a name or number",
+ gal_fits_name_save_as_string(p->filename, p->cp.hdu),
+ dtmp->name,
+ ( k==0?"range":( k==1?"equal":"notequal") ));
+ ++i;
+ }
}
- /* See which columns the user has asked for. */
+ /* See which columns the user has asked to output. */
indexll=gal_table_list_of_indexs(p->columns, allcols, numcols,
p->cp.searchin, p->cp.ignorecase,
p->filename, p->cp.hdu, NULL);
@@ -788,47 +813,53 @@ ui_check_range_sort_before(struct tableparams *p,
gal_list_str_t *lines,
i=0;
for(tmp=indexll; tmp!=NULL; tmp=tmp->next)
{
- if(p->sort && *sortindout==GAL_BLANK_SIZE_T && tmp->v == sortind)
+ if(p->sort && *sortindout==GAL_BLANK_SIZE_T && tmp->v == sortind)
*sortindout=i;
- if(p->range)
- for(j=0;j<*nrange;++j)
- if(rangeindout[j]==GAL_BLANK_SIZE_T && tmp->v==rangeind[j])
- rangeindout[j]=i;
+ if(p->selection)
+ for(j=0;j<*nselect;++j)
+ if(selectindout[j]==GAL_BLANK_SIZE_T && tmp->v==selectind[j])
+ {
+ selectindout[j]=i;
+ selecttypeout[j]=selecttype[j];
+ }
++i;
}
- /* See if any of the necessary columns (for `--sort' and `--range')
- aren't requested as an output by the user. If there is any, such
- columns, keep them here. */
- if( p->sort && *sortindout==GAL_BLANK_SIZE_T )
- { *sortindout=allncols++; gal_list_str_add(&add, p->sort, 0); }
-
+  /* See if any of the sorting or selection columns aren't requested as an
+     output by the user. If there are, keep their new labels.
- /* Note that the sorting and range may be requested on the same
+ Note that the sorting and range may be requested on the same
column. In this case, we don't want to read the same column twice. */
- if(p->range)
+ if( p->sort && *sortindout==GAL_BLANK_SIZE_T )
+ { *sortindout=allncols++; gal_list_str_add(&add, p->sort, 0); }
+ if(p->selection)
{
i=0;
- for(dtmp=p->range;dtmp!=NULL;dtmp=dtmp->next)
- {
- if(*sortindout!=GAL_BLANK_SIZE_T
- && rangeindout[i]==*sortindout)
- rangeindout[i]=*sortindout;
- else
- {
- if( rangeindout[i]==GAL_BLANK_SIZE_T )
- {
- rangeindout[i]=allncols++;
- gal_list_str_add(&add, dtmp->name, 0);
- }
- }
- ++i;
- }
+ for(k=0;k<SELECT_TYPE_NUMBER;++k)
+ for(dtmp=select[k];dtmp!=NULL;dtmp=dtmp->next)
+ {
+ if(*sortindout!=GAL_BLANK_SIZE_T && selectindout[i]==*sortindout)
+ {
+ selecttypeout[i]=k;
+ selectindout[i]=*sortindout;
+ }
+ else
+ {
+ if( selectindout[i]==GAL_BLANK_SIZE_T )
+ {
+ selecttypeout[i]=k;
+ selectindout[i]=allncols++;
+ gal_list_str_add(&add, dtmp->name, 0);
+ }
+ }
+ ++i;
+ }
}
- /* Add the possibly new set of columns to read. */
+ /* If any new (not requested by the user to output) columns must be read,
+ add them to the list of columns to read from the input file. */
if(add)
{
gal_list_str_reverse(&add);
@@ -838,8 +869,9 @@ ui_check_range_sort_before(struct tableparams *p,
gal_list_str_t *lines,
/* Clean up. */
- if(rangeind) free(rangeind);
gal_list_sizet_free(indexll);
+ if(selectind) free(selectind);
+ if(selecttype) free(selecttype);
gal_data_array_free(allcols, numcols, 0);
}
@@ -848,80 +880,72 @@ ui_check_range_sort_before(struct tableparams *p,
gal_list_str_t *lines,
static void
-ui_check_range_sort_after(struct tableparams *p, size_t nrange,
- size_t origoutncols, size_t sortindout,
- size_t *rangeindout)
+ui_check_select_sort_after(struct tableparams *p, size_t nselect,
+ size_t origoutncols, size_t sortindout,
+ size_t *selectindout, size_t *selecttypeout)
{
- struct list_range *rtmp;
- size_t i, j, *rangein=NULL;
- gal_data_t *tmp, *last=NULL;
+ size_t i, j;
+ struct list_select *rtmp;
+ gal_data_t *tmp, *origlast=NULL;
/* Allocate the necessary arrays. */
- if(p->range)
- {
- rangein=gal_pointer_allocate(GAL_TYPE_UINT8, nrange, 0,
- __func__, "rangein");
- p->freerange=gal_pointer_allocate(GAL_TYPE_UINT8, nrange, 1,
- __func__, "p->freerange");
- }
+ if(p->selection)
+ p->freeselect=gal_pointer_allocate(GAL_TYPE_UINT8, nselect, 1,
+ __func__, "p->freeselect");
- /* Set the proper pointers. For `rangecol' we'll need to do it separately
- (because the orders can get confused).*/
+ /* Set some necessary pointers (last pointer of actual output table and
+ pointer to the sort column). */
i=0;
for(tmp=p->table; tmp!=NULL; tmp=tmp->next)
{
- if(i==origoutncols-1) last=tmp;
+ if(i==origoutncols-1) origlast=tmp;
if(p->sort && i==sortindout) p->sortcol=tmp;
++i;
}
- /* Find the range columns. */
- for(i=0;i<nrange;++i)
+ /* Since we can have several selection columns, we'll treat them
+ differently. */
+ for(i=0;i<nselect;++i)
{
j=0;
for(tmp=p->table; tmp!=NULL; tmp=tmp->next)
{
- if(j==rangeindout[i])
+ if(j==selectindout[i])
{
- ui_list_range_add(&p->rangecol, tmp);
+ ui_list_select_add(&p->selectcol, tmp, selecttypeout[i]);
break;
}
++j;
}
}
- ui_list_range_reverse(&p->rangecol);
+ ui_list_select_reverse(&p->selectcol);
- /* Terminate the actual table where it should be terminated (by setting
- `last->next' to NULL. */
- last->next=NULL;
+ /* Terminate the desired output table where it should be terminated (by
+ setting `origlast->next' to NULL). */
+ origlast->next=NULL;
/* Also, remove any possibly existing `next' pointer for `sortcol' and
- `rangecol'. */
+ `selectcol'. */
if(p->sort && sortindout>=origoutncols)
{ p->sortcol->next=NULL; p->freesort=1; }
else p->sortin=1;
- if(p->range)
+ if(p->selection)
{
i=0;
- for(rtmp=p->rangecol;rtmp!=NULL;rtmp=rtmp->next)
+ for(rtmp=p->selectcol;rtmp!=NULL;rtmp=rtmp->next)
{
- if(rangeindout[i]>=origoutncols)
+ if(selectindout[i]>=origoutncols)
{
- rtmp->v->next=NULL;
- p->freerange[i] = (rtmp->v==p->sortcol) ? 0 : 1;
+ rtmp->col->next=NULL;
+ p->freeselect[i] = (rtmp->col==p->sortcol) ? 0 : 1;
}
- else rangein[i]=1;
++i;
}
}
-
-
- /* Clean up. */
- if(rangein) free(rangein);
}
@@ -934,9 +958,10 @@ ui_preparations(struct tableparams *p)
{
size_t *colmatch;
gal_list_str_t *lines;
- size_t nrange=0, origoutncols=0;
+ size_t nselect=0, origoutncols=0;
+ size_t sortindout=GAL_BLANK_SIZE_T;
struct gal_options_common_params *cp=&p->cp;
- size_t sortindout=GAL_BLANK_SIZE_T, *rangeindout=NULL;
+ size_t *selectindout=NULL, *selecttypeout=NULL;
/* If there were no columns specified or the user has asked for
information on the columns, we want the full set of columns. */
@@ -952,10 +977,14 @@ ui_preparations(struct tableparams *p)
lines=gal_options_check_stdin(p->filename, p->cp.stdintimeout, "input");
- /* If sort or range are given, see if we should read them also. */
- if(p->range || p->sort)
- ui_check_range_sort_before(p, lines, &nrange, &origoutncols, &sortindout,
- &rangeindout);
+ /* If any kind of row-selection is requested set `p->selection' to 1. */
+ p->selection = p->range || p->equal || p->notequal;
+
+ /* If row sorting or selection are requested, see if we should read any
+ extra columns. */
+ if(p->selection || p->sort)
+ ui_check_select_sort_before(p, lines, &nselect, &origoutncols, &sortindout,
+ &selectindout, &selecttypeout);
/* If we have any arithmetic operations, we need to make sure how many
@@ -975,11 +1004,11 @@ ui_preparations(struct tableparams *p)
gal_list_str_free(lines, 1);
- /* If the range and sort options are requested, keep them as separate
- datasets. */
- if(p->range || p->sort)
- ui_check_range_sort_after(p, nrange, origoutncols, sortindout,
- rangeindout);
+ /* If row sorting or selection are requested, keep them as separate
+ datasets.*/
+ if(p->selection || p->sort)
+ ui_check_select_sort_after(p, nselect, origoutncols, sortindout,
+ selectindout, selecttypeout);
/* If there was no actual data in the file, then inform the user and
@@ -1018,7 +1047,8 @@ ui_preparations(struct tableparams *p)
/* Clean up. */
free(colmatch);
- if(rangeindout) free(rangeindout);
+ if(selectindout) free(selectindout);
+ if(selecttypeout) free(selecttypeout);
}
diff --git a/bin/table/ui.h b/bin/table/ui.h
index 37f61a3..7af1d1c 100644
--- a/bin/table/ui.h
+++ b/bin/table/ui.h
@@ -30,9 +30,18 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
+/* Option groups particular to this program. */
+enum program_args_groups
+{
+ UI_GROUP_OUTROWS = GAL_OPTIONS_GROUP_AFTER_COMMON,
+};
+
+
+
+
/* Available letters for short options:
- a b d e f g j k l m n p t u v x y z
+ a b d f g j k l m p t u v x y z
A B C E G H J L O Q R X Y
*/
enum option_keys_enum
@@ -44,6 +53,8 @@ enum option_keys_enum
UI_KEY_INFORMATION = 'i',
UI_KEY_COLINFOINSTDOUT = 'O',
UI_KEY_RANGE = 'r',
+ UI_KEY_EQUAL = 'e',
+ UI_KEY_NOTEQUAL = 'n',
UI_KEY_SORT = 's',
UI_KEY_DESCENDING = 'd',
UI_KEY_HEAD = 'H',
@@ -61,7 +72,7 @@ void
ui_read_check_inputs_setup(int argc, char *argv[], struct tableparams *p);
void
-ui_list_range_free(struct list_range *list, int freevalue);
+ui_list_select_free(struct list_select *list, int freevalue);
void
ui_free_report(struct tableparams *p);
diff --git a/bin/warp/Makefile.am b/bin/warp/Makefile.am
index b7c0a7f..9759465 100644
--- a/bin/warp/Makefile.am
+++ b/bin/warp/Makefile.am
@@ -23,13 +23,17 @@
AM_LDFLAGS = -L$(top_builddir)/lib
AM_CPPFLAGS = -I$(top_srcdir)/bootstrapped/lib -I$(top_srcdir)/lib
+if COND_NORPATH
+ MAYBE_NORPATH = $(CONFIG_LDADD)
+endif
## Program definition (name, linking, sources and headers)
bin_PROGRAMS = astwarp
## Reason for linking with `libgnu' described in `bin/TEMPLATE/Makefile.am'.
-astwarp_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro
+astwarp_LDADD = $(top_builddir)/bootstrapped/lib/libgnu.la -lgnuastro \
+ $(MAYBE_NORPATH)
astwarp_SOURCES = main.c ui.c warp.c
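[Editor's note: the `COND_NORPATH`/`MAYBE_NORPATH` pair added to every Makefile.am above reduces to a simple conditional. A minimal shell sketch of its effect (the `enable_rpath` value and linking flags below are made-up illustrations, not Gnuastro's actual configure output):

```shell
# Sketch of what the Automake conditional amounts to: when configure was
# run with --disable-rpath, COND_NORPATH is true and MAYBE_NORPATH
# expands to the recorded linking flags; otherwise it stays empty.
enable_rpath=no                      # assumed configure result
CONFIG_LDADD="-lgnuastro -lcfitsio"  # hypothetical recorded flags
if test "x$enable_rpath" = "xno"; then
    MAYBE_NORPATH=$CONFIG_LDADD
else
    MAYBE_NORPATH=""
fi
echo "LDADD extras: $MAYBE_NORPATH"
```

Each program's `_LDADD` then appends `$(MAYBE_NORPATH)`, so the extra flags only appear in the no-rpath build.]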
diff --git a/configure.ac b/configure.ac
index cd8f11f..d3b48c6 100644
--- a/configure.ac
+++ b/configure.ac
@@ -50,7 +50,7 @@ AC_CONFIG_MACRO_DIRS([bootstrapped/m4])
# Library version, see the GNU Libtool manual ("Library interface versions"
# section for the exact definition of each) for
-GAL_CURRENT=8
+GAL_CURRENT=9
GAL_REVISION=0
GAL_AGE=0
GAL_LT_VERSION="${GAL_CURRENT}:${GAL_REVISION}:${GAL_AGE}"
@@ -251,82 +251,6 @@ AC_MSG_RESULT( $path_warning )
-# GAL_LIBCHECK([$LIBNAME], [libname], [-lname])
-# ---------------------------------------------
-#
-# Custom macro to correct LIBS and LD_LIBRARY_PATH when necessary.
-# 1) LIB<NAME> output of AC_LIB_HAVE_LINKFLAGS (e.g., `LIBGSL').
-# 2) lib<name> of the library (e.g., `libgsl').
-# 3) Full list of libraries to link with (e.g., `-lgsl -lgslcblas').
-#
-# To understand this, have in mind that when the library can be linked in
-# the standard search path without conflict, the output of
-# `AC_LIB_HAVE_LINKFLAGS' is going to be the actual linking command (like
-# `-lgsl -lgslcblas'). Otherwise, it will actually return the absolute
-# shared library address (for example `/some/directory/libgsl.so').
-#
-# So when `AC_LIB_HAVE_LINKFLAGS' returns a string containing `lib<name>',
-# it has found the shared library and we must add its directory to
-# LD_LIBRARY_PATH.
-AC_DEFUN([GAL_LIBCHECK],
-[
- n=`AS_ECHO([$1]) | grep $2`;
- AS_IF([test "x$n" = x],
- [LIBS="$1 $LIBS"],
- [
- # Go through all the tokens of `AC_LIB_HAVE_LINKFLAGS'.
- for token in $1; do
- d="";
-
- # See if this token has `lib<name' in it. If it does, then
- # we'll have to extract the directory.
- a=`AS_ECHO([$token]) | grep $2`
- AS_IF([test "x$a" = x], [],
- [
- # Use `lib<name>' as a delimiter to extract the
- # library (the first token will be the library's
- # directory).
- n=`AS_ECHO([$token]) | sed 's/\/'$2'/ /g'`
- for b in $n; do d=$b; break; done
- ])
-
- # If a directory was found, then stop parsing tokens.
- AS_IF([test "x$d" = x], [], [break])
- done;
-
- # Add the necessary linking flags to LIBS with the proper `-L'
- # (when necessary) to find them.
- AS_IF( [ test "x$d" = x ],
- [LIBS="$3 $LIBS"],
- [LIBS="-L$d $3 $LIBS"] )
-
- # Add the directory to LD_LIBRARY_PATH (if necessary). Go
- # through all the directories in LD_LIBRARY_PATH and if the
- # library doesn't exist add it to the end.
- nldp="";
- exists=0;
- for i in `AS_ECHO(["$LD_LIBRARY_PATH"]) | sed "s/:/ /g"`; do
- AS_IF([test "x$d" = "x$i"],[exists=1])
- AS_IF([test "x$nldp" = x],[nldp=$i],[nldp="$nldp:$i"])
- done
-
- # If the directory doesn't already exist in LD_LIBRARY_PATH, then
- # add it.
- AS_IF([test $exists = 0],
- [
- nldp="$nldp:$d"
- AS_IF([test "x$ldlibpathnew" = x],
- [ldlibpathnew="$d"],
- [ldlibpathnew="$ldlibpathnew:$d"])
- ])
- LD_LIBRARY_PATH=$nldp
- ])
-])
-
-
-
-
-
# Libraries
# ---------
#
@@ -416,14 +340,19 @@ AS_IF([test "x$LIBWCS" = x],
[LDADD="$LTLIBWCS $LDADD"; LIBS="$LIBWCS $LIBS"])
-AC_LIB_HAVE_LINKFLAGS([jpeg], [], [
+AC_ARG_WITH([libjpeg],
+ [AS_HELP_STRING([--without-libjpeg],
+ [disable support for libjpeg])],
+ [], [with_libjpeg=yes])
+AS_IF([test "x$with_libjpeg" != xno],
+ [ AC_LIB_HAVE_LINKFLAGS([jpeg], [], [
#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>
void junk(void) {
struct jpeg_decompress_struct cinfo;
jpeg_create_decompress(&cinfo);
-} ])
+} ]) ])
AS_IF([test "x$LIBJPEG" = x],
[missing_optional_lib=yes; has_libjpeg=no; anywarnings=yes],
[LDADD="$LTLIBJPEG $LDADD"; LIBS="$LIBJPEG $LIBS"])
@@ -435,14 +364,18 @@ AM_CONDITIONAL([COND_HASLIBJPEG], [test "x$has_libjpeg" = "xyes"])
# the LZMA library. But if libtiff hasn't been linked with it and its
# present, there is no problem, the linker will just pass over it. So we
# don't need to stop the build if this fails.
-AC_LIB_HAVE_LINKFLAGS([lzma], [], [#include <lzma.h>])
+AC_ARG_WITH([libtiff],
+ [AS_HELP_STRING([--without-libtiff],
+ [disable support for libtiff])],
+ [], [with_libtiff=yes])
+AS_IF([test "x$with_libtiff" != xno],
+ [ AC_LIB_HAVE_LINKFLAGS([lzma], [], [#include <lzma.h>])
+ AC_LIB_HAVE_LINKFLAGS([tiff], [], [
+#include <tiffio.h>
+void junk(void) {TIFF *tif=TIFFOpen("junk", "r");} ])
+ ])
AS_IF([test "x$LIBLZMA" = x], [],
[LDADD="$LTLIBLZMA $LDADD"; LIBS="$LIBLZMA $LIBS"])
-
-AC_LIB_HAVE_LINKFLAGS([tiff], [], [
-#include <tiffio.h>
-void junk(void) {TIFF *tif=TIFFOpen("junk", "r");}
-])
AS_IF([test "x$LIBTIFF" = x],
[missing_optional_lib=yes; has_libtiff=no; anywarnings=yes],
[LDADD="$LTLIBTIFF $LDADD"; LIBS="$LIBTIFF $LIBS"])
@@ -451,10 +384,15 @@ AM_CONDITIONAL([COND_HASLIBTIFF], [test "x$has_libtiff" = "xyes"])
# Check libgit2. Note that very old versions of libgit2 don't have the
# `git_libgit2_init' function.
-AC_LIB_HAVE_LINKFLAGS([git2], [], [
+AC_ARG_WITH([libgit2],
+ [AS_HELP_STRING([--without-libgit2],
+ [disable support for libgit2])],
+ [], [with_libgit2=yes])
+AS_IF([test "x$with_libgit2" != xno],
+ [ AC_LIB_HAVE_LINKFLAGS([git2], [], [
#include <git2.h>
-void junk(void) {git_libgit2_init();}
-])
+void junk(void) {git_libgit2_init();} ])
+ ])
AS_IF([test "x$LIBGIT2" = x],
[missing_optional_lib=yes; has_libgit2=0],
[LDADD="$LTLIBGIT2 $LDADD"; LIBS="$LIBGIT2 $LIBS"])
@@ -945,6 +883,7 @@ AM_CONDITIONAL([COND_WARP], [test $enable_warp = yes])
# linking flags and put them in the Makefiles.
LIBS="$orig_LIBS"
AC_SUBST(CONFIG_LDADD, [$LDADD])
+AM_CONDITIONAL([COND_NORPATH], [test "x$enable_rpath" = "xno"])
AS_ECHO(["linking flags (LDADD) ... $LDADD"])
@@ -1042,23 +981,26 @@ AS_IF([test x$enable_guide_message = xyes],
AS_IF([test "x$has_libjpeg" = "xno"],
[dependency_notice=yes
AS_ECHO([" - libjpeg (http://ijg.org), could not be linked with in your library"])
- AS_ECHO([" search path. If JPEG inputs/outputs are requested, the respective"])
- AS_ECHO([" tool will inform you and abort with an error."])
+ AS_ECHO([" search path, or is manually disabled. If JPEG inputs/outputs are"])
+ AS_ECHO([" requested, the respective tool will inform you and abort with an"])
+ AS_ECHO([" error."])
AS_ECHO([]) ])
AS_IF([test "x$has_libtiff" = "xno"],
[dependency_notice=yes
AS_ECHO([" - libtiff (http://libtiff.maptools.org), could not be linked with in"])
- AS_ECHO([" your library search path. If TIFF inputs/outputs are requested, the"])
- AS_ECHO([" respective tool will inform you and abort with an error."])
+ AS_ECHO([" your library search path, or is manually disabled. If TIFF"])
+ AS_ECHO([" inputs/outputs are requested, the respective tool will inform"])
+ AS_ECHO([" you and abort with an error."])
AS_ECHO([]) ])
AS_IF([test "x$has_libgit2" = "x0"],
[dependency_notice=yes
AS_ECHO([" - libgit2 (https://libgit2.org), could not be linked with in your"])
- AS_ECHO([" library search path. When present, Git's describe output will be"])
- AS_ECHO([" stored in the output files if Gnuastro's programs were called"])
- AS_ECHO([" within a Git version controlled directory to help in reproducibility."])
+ AS_ECHO([" library search path, or is manually disabled. When present, Git's"])
+ AS_ECHO([" describe output will be stored in the output files if Gnuastro's"])
+ AS_ECHO([" programs were called within a Git version controlled directory to"])
+ AS_ECHO([" help in reproducibility."])
AS_ECHO([]) ])
AS_IF([test "x$usable_libtool" = "xno"],
@@ -1121,23 +1063,6 @@ AS_IF([test x$enable_guide_message = xyes],
AS_ECHO([" installing Gnuastro to learn more about PATH:"])
AS_ECHO([" $ info gnuastro \"Installation directory\""])
AS_ECHO([]) ])
-
- # Notice about run-time linking.
- AS_IF([test "x$nldpath" = x], [],
- AS_ECHO([" - After installation, to run Gnuastro's programs, your run-time"])
- AS_ECHO([" link path (LD_LIBRARY_PATH) needs to contain the following"])
- AS_ECHO([" directory(s):"])
- AS_ECHO([" $nldpath"])
- AS_ECHO([" If there is more than one directory, they are separated with a"])
- AS_ECHO([" colon (':'). You can check the current value with:"])
- AS_ECHO([" echo \$LD_LIBRARY_PATH"])
- AS_ECHO([" If not present, add this line in your shell's startup script"])
- AS_ECHO([" (for example '~/.bashrc'):"])
- AS_ECHO([" export LD_LIBRARY_PATH=\"\$LD_LIBRARY_PATH:$nldpath\""])
- AS_ECHO([" This worning won't cause any problems during the rest of Gnuastro's"])
- AS_ECHO([" build and installation. But you'll need it later, when you are using"])
- AS_ECHO([" Gnuastro."])
- AS_ECHO([]) ])
]
)
AS_ECHO(["To build Gnuastro $PACKAGE_VERSION, please run:"])
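[Editor's note: the `--without-libjpeg`, `--without-libtiff` and `--without-libgit2` additions in this configure.ac diff all follow the same `AC_ARG_WITH` pattern. Stripped of M4 quoting, it behaves roughly like the shell sketch below; the option parsing is simplified and the probe is a placeholder, not the real `AC_LIB_HAVE_LINKFLAGS` test:

```shell
# Simplified model of the AC_ARG_WITH([libjpeg], ...) logic: the default
# is "yes", --without-libjpeg sets it to "no", and the (placeholder)
# link test only runs when support wasn't explicitly disabled.
probe_libjpeg() {
    with_libjpeg=yes                      # default when the flag is absent
    for arg in "$@"; do
        case $arg in
            --without-libjpeg) with_libjpeg=no ;;
        esac
    done
    if test "x$with_libjpeg" != xno; then
        echo "checking for libjpeg..."    # stand-in for the real probe
    else
        echo "libjpeg support disabled"
    fi
}
probe_libjpeg --without-libjpeg
```

When the probe is skipped, `LIBJPEG` stays empty, so the later `AS_IF([test "x$LIBJPEG" = x], ...)` branch reports the library as missing, exactly as if it had not been found.]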
diff --git a/doc/announce-acknowledge.txt b/doc/announce-acknowledge.txt
index 241dcec..1f9dc27 100644
--- a/doc/announce-acknowledge.txt
+++ b/doc/announce-acknowledge.txt
@@ -1,21 +1,6 @@
Alphabetically ordered list to acknowledge in the next release.
-Hamed Altafi
-Roberto Baena Gallé
-Zahra Bagheri
-Leindert Boogaard
-Bruno Haible
-Raul Infante-Sainz
-Lee Kelvin
-Elham Saremi
-Zahra Sharbaf
-David Valls-Gabaud
-Michael Wilkinson
-
-
-
-
-
+Raúl Infante Sainz
diff --git a/doc/genauthors b/doc/genauthors
index 078fbf1..42806e9 100755
--- a/doc/genauthors
+++ b/doc/genauthors
@@ -44,19 +44,27 @@ if [ -d $1/.git ]; then
# directory. The original `.mailmap' is in the `TOP_SRCDIR', so even
# when the source and build directories are the same, there is no
# problem.
- ln -s $1/.mailmap .mailmap
+ #
+ # But in case `.mailmap' already exists (for example the script is run
+ # in the top source directory not from the `doc' directory, or if a
+ # symbolic link was already created), we won't do any copying.
+ if [ -e .mailmap ]; then keepmailmap=1;
+ else keepmailmap=0; ln -s $1/.mailmap .mailmap;
+ fi
# Do NOT test if authors.texi is newer than ../.git. In some cases the
# list of authors is created empty when running make in top directory
# (in particular "make -jN" with N > 1), so authors.texi needs to be
# recreated anyway.
git --git-dir=$1/.git shortlog --numbered --summary --email --no-merges \
- | sed -e 's/</ /' -e 's/>/ /' -e 's/@/@@/' -e "s/è/@\`e/" \
- | awk '{printf "%s %s (%s, %s)@*\n", $2, $3, $4, $1}' \
+ | sed -e 's/</ /' -e 's/>/ /' -e 's/@/@@/' \
+ -e "s/è/@\`e/" -e "s/é/@\'e/" \
+ | awk '{for(i=2;i<NF;++i) printf("%s ", $i); \
+ printf("(%s, %s)@*\n", $NF, $1)}' \
> $1/doc/authors.texi
- # Clean up:
- rm .mailmap
+ # Clean up (if necessary)
+ if [ $keepmailmap = 0 ]; then rm .mailmap; fi
# Check if the authors.texi file was actually written:
if [ ! -s $1/doc/authors.texi ]; then
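[Editor's note: the new sed/awk pipeline in the genauthors hunk above fixes name handling for authors with more than two name components (the old `$2, $3` form silently dropped extra words). A small demonstration on a fabricated `git shortlog` line; the name and email are invented for illustration:

```shell
# Fabricated shortlog line (count, multi-word name, email). The pipeline
# strips the angle brackets, doubles '@' for Texinfo, then prints every
# name word followed by "(email, count)@*".
printf '   110\tAna de la Cruz <ana@example.org>\n' \
    | sed -e 's/</ /' -e 's/>/ /' -e 's/@/@@/' \
    | awk '{for(i=2;i<NF;++i) printf("%s ", $i); \
            printf("(%s, %s)@*\n", $NF, $1)}'
# → Ana de la Cruz (ana@@example.org, 110)@*
```

With the old `awk '{printf "%s %s (%s, %s)@*\n", $2, $3, $4, $1}'`, the same input would have produced "Ana de (la, 110)@*", mistaking the third name word for the email field.]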
diff --git a/doc/gnuastro.en.html b/doc/gnuastro.en.html
index eba3998..5da54f4 100644
--- a/doc/gnuastro.en.html
+++ b/doc/gnuastro.en.html
@@ -85,9 +85,9 @@ for entertaining and easy to read real world examples of using
<p>
The current stable release
- is <a href="http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.9.tar.gz">Gnuastro
- 0.9</a> (April 17th, 2019).
- Use <a href="http://ftpmirror.gnu.org/gnuastro/gnuastro-0.9.tar.gz">a
+ is <a href="http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.10.tar.gz">Gnuastro
+ 0.10</a> (August 3rd, 2019).
+ Use <a href="http://ftpmirror.gnu.org/gnuastro/gnuastro-0.10.tar.gz">a
mirror</a> if possible.
<!-- Comment the test release notice when the test release is not more
@@ -98,7 +98,7 @@ for entertaining and easy to read real world examples of using
To stay up to date, please subscribe.</p>
<p>For details of the significant changes in this release, please see the
- <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.9">NEWS</a>
+ <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.10">NEWS</a>
file.</p>
<p>The
diff --git a/doc/gnuastro.fr.html b/doc/gnuastro.fr.html
index ee9f654..8ba8ceb 100644
--- a/doc/gnuastro.fr.html
+++ b/doc/gnuastro.fr.html
@@ -85,15 +85,15 @@ h3 { clear: both; }
<h3 id="download">Téléchargement</h3>
<p>La version stable actuelle
- est <a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.9.tar.gz">Gnuastro
- 0.9</a> (sortie le 28 avril
- 2019). Utilisez <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.9.tar.gz">un
+ est <a href="https://ftp.gnu.org/gnu/gnuastro/gnuastro-0.10.tar.gz">Gnuastro
+ 0.10</a> (sortie le 3 août
+ 2019). Utilisez <a href="https://ftpmirror.gnu.org/gnuastro/gnuastro-0.10.tar.gz">un
miroir</a> si possible. <br />Les nouvelles publications sont annoncées
sur <a href="https://lists.gnu.org/mailman/listinfo/info-gnuastro">info-gnuastro</a>.
Abonnez-vous pour rester au courant.</p>
<p>Les changements importants sont décrits dans le
- fichier <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.9">
+ fichier <a href="https://git.savannah.gnu.org/cgit/gnuastro.git/plain/NEWS?id=gnuastro_v0.10">
NEWS</a>.</p>
<p>Le lien
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index cdba66c..fd62f19 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -1,4 +1,12 @@
\input texinfo @c -*-texinfo-*-
+
+@c ONE SENTENCE PER LINE
+@c ---------------------
+@c For main printed text in this file, to allow easy tracking of history
+@c with Git, we are following a one-sentence-per-line convention.
+@c
+@c Since the manual is long, this is being done gradually from the start.
+
@c %**start of header
@setfilename gnuastro.info
@settitle GNU Astronomy Utilities
@@ -25,19 +33,14 @@
@c Copyright information:
@copying
-This book documents version @value{VERSION} of the GNU Astronomy Utilities
-(Gnuastro). Gnuastro provides various programs and libraries for
-astronomical data manipulation and analysis.
+This book documents version @value{VERSION} of the GNU Astronomy Utilities (Gnuastro).
+Gnuastro provides various programs and libraries for astronomical data manipulation and analysis.
Copyright @copyright{} 2015-2019 Free Software Foundation, Inc.
@quotation
-Permission is granted to copy, distribute and/or modify this document under
-the terms of the GNU Free Documentation License, Version 1.3 or any later
-version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the
-license is included in the section entitled ``GNU Free Documentation
-License''.
+Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+A copy of the license is included in the section entitled ``GNU Free Documentation License''.
@end quotation
@end copying
@@ -149,15 +152,8 @@ commits):
@*
@*
@*
-For myself, I am interested in science and in philosophy only because I
-want to learn something about the riddle of the world in which we live, and
-the riddle of man's knowledge of that world. And I believe that only a
-revival of interest in these riddles can save the sciences and philosophy
-from narrow specialization and from an obscurantist faith in the expert's
-special skill, and in his personal knowledge and authority; a faith that so
-well fits our `post-rationalist' and `post-critical' age, proudly dedicated
-to the destruction of the tradition of rational philosophy, and of rational
-thought itself.
+For myself, I am interested in science and in philosophy only because I want to learn something about the riddle of the world in which we live, and the riddle of man's knowledge of that world.
+And I believe that only a revival of interest in these riddles can save the sciences and philosophy from narrow specialization and from an obscurantist faith in the expert's special skill, and in his personal knowledge and authority; a faith that so well fits our `post-rationalist' and `post-critical' age, proudly dedicated to the destruction of the tradition of rational philosophy, and of rational thought itself.
@author Karl Popper. The logic of scientific discovery. 1959.
@end quotation
@@ -184,13 +180,9 @@ thought itself.
@insertcopying
@ifhtml
-To navigate easily in this web page, you can use the @code{Next},
-@code{Previous}, @code{Up} and @code{Contents} links in the top and
-bottom of each page. @code{Next} and @code{Previous} will take you to
-the next or previous topic in the same level, for example from chapter
-1 to chapter 2 or vice versa. To go to the sections or subsections,
-you have to click on the menu entries that are there when ever a
-sub-component to a title is present.
+To navigate easily in this web page, you can use the @code{Next}, @code{Previous}, @code{Up} and @code{Contents} links in the top and bottom of each page.
+@code{Next} and @code{Previous} will take you to the next or previous topic in the same level, for example from chapter 1 to chapter 2 or vice versa.
+To go to the sections or subsections, you have to click on the menu entries that are there whenever a sub-component to a title is present.
@end ifhtml
@end ifnottex
@@ -264,7 +256,7 @@ General program usage tutorial
* NoiseChisel optimization for storage:: Dramatically decrease output's volume.
* Segmentation and making a catalog:: Finding true peaks and creating a catalog.
* Working with catalogs estimating colors:: Estimating colors using the catalogs.
-* Aperture photomery:: Doing photometry on a fixed aperture.
+* Aperture photometry:: Doing photometry on a fixed aperture.
* Finding reddest clumps and visual inspection:: Selecting some targets and inspecting them.
* Citing and acknowledging Gnuastro:: How to cite and acknowledge Gnuastro in your papers.
@@ -629,7 +621,7 @@ Gnuastro library
* Convolution functions:: Library functions to do convolution.
* Interpolation:: Interpolate (over blank values possibly).
* Git wrappers:: Wrappers for functions in libgit2.
-* Spectral lines library::
+* Spectral lines library:: Functions for operating on spectral lines.
* Cosmology library:: Cosmological calculations.
Multithreaded programming (@file{threads.h})
@@ -728,34 +720,19 @@ SAO ds9
@cindex GNU coding standards
@cindex GNU Astronomy Utilities (Gnuastro)
-GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of
-separate programs and libraries for the manipulation and analysis of
-astronomical data. All the programs share the same basic command-line user
-interface for the comfort of both the users and developers. Gnuastro is
-written to comply fully with the GNU coding standards so it integrates
-finely with the GNU/Linux operating system. This also enables astronomers
-to expect a fully familiar experience in the source code, building,
-installing and command-line user interaction that they have seen in all the
-other GNU software that they use. The official and always up to date
-version of this book (or manual) is freely available under @ref{GNU Free
-Doc. License} in various formats (PDF, HTML, plain text, info, and as its
-Texinfo source) at @url{http://www.gnu.org/software/gnuastro/manual/}.
-
-For users who are new to the GNU/Linux environment, unless otherwise
-specified most of the topics in @ref{Installation} and @ref{Common program
-behavior} are common to all GNU software, for example installation,
-managing command-line options or getting help (also see @ref{New to
-GNU/Linux?}). So if you are new to this empowering environment, we
-encourage you to go through these chapters carefully. They can be a
-starting point from which you can continue to learn more from each
-program's own manual and fully benefit from and enjoy this wonderful
-environment. Gnuastro also comes with a large set of libraries, so you can
-write your own programs using Gnuastro's building blocks, see @ref{Review
-of library fundamentals} for an introduction.
-
-In Gnuastro, no change to any program or library will be committed to its
-history, before it has been fully documented here first. As discussed in
-@ref{Science and its tools} this is a founding principle of the Gnuastro.
+GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of separate programs and libraries for the manipulation and analysis of astronomical data.
+All the programs share the same basic command-line user interface for the comfort of both the users and developers.
+Gnuastro is written to comply fully with the GNU coding standards so it integrates finely with the GNU/Linux operating system.
+This also enables astronomers to expect a fully familiar experience in the source code, building, installing and command-line user interaction that they have seen in all the other GNU software that they use.
+The official and always up to date version of this book (or manual) is freely available under @ref{GNU Free Doc. License} in various formats (PDF, HTML, plain text, info, and as its Texinfo source) at @url{http://www.gnu.org/software/gnuastro/manual/}.
+
+For users who are new to the GNU/Linux environment, unless otherwise specified most of the topics in @ref{Installation} and @ref{Common program behavior} are common to all GNU software, for example installation, managing command-line options or getting help (also see @ref{New to GNU/Linux?}).
+So if you are new to this empowering environment, we encourage you to go through these chapters carefully.
+They can be a starting point from which you can continue to learn more from each program's own manual and fully benefit from and enjoy this wonderful environment.
+Gnuastro also comes with a large set of libraries, so you can write your own programs using Gnuastro's building blocks, see @ref{Review of library fundamentals} for an introduction.
+
+In Gnuastro, no change to any program or library will be committed to its history, before it has been fully documented here first.
+As discussed in @ref{Science and its tools} this is a founding principle of Gnuastro.
@menu
* Quick start:: A quick start to installation.
@@ -783,32 +760,15 @@ history, before it has been fully documented here first. As discussed in
@cindex GNU Tar
@cindex Uncompress source
@cindex Source, uncompress
-The latest official release tarball is always available as
-@url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.gz,
-@file{gnuastro-latest.tar.gz}}. For better compression (faster download),
-and robust archival features, an @url{http://www.nongnu.org/lzip/lzip.html,
-Lzip} compressed tarball is also available at
-@url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz,
-@file{gnuastro-latest.tar.lz}}, see @ref{Release tarball} for more details
-on the tarball release@footnote{The Gzip library and program are commonly
-available on most systems. However, Gnuastro recommends Lzip as described
-above and the beta-releases are also only distributed in @file{tar.lz}. You
-can download and install Lzip's source (in @file{.tar.gz} format) from its
-webpage and follow the same process as below: Lzip has no dependencies, so
-simply decompress, then run @command{./configure}, @command{make},
-@command{sudo make install}.}.
-
-
-Let's assume the downloaded tarball is in the @file{TOPGNUASTRO}
-directory. The first two commands below can be used to decompress the
-source. If you download @file{tar.lz} and your Tar implementation doesn't
-recognize Lzip (the second command fails), run the third and fourth
-lines@footnote{In case Tar doesn't directly uncompress your @file{.tar.lz}
-tarball, you can merge the separate calls to Lzip and Tar (shown in the
-main body of text) into one command by directly piping the output of Lzip
-into Tar with a command like this: @command{$ lzip -cd gnuastro-0.5.tar.lz
-| tar -xf -}}. Note that lines starting with @code{##} don't need to be
-typed.
+The latest official release tarball is always available as @url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.gz, @file{gnuastro-latest.tar.gz}}.
+For better compression (faster download), and robust archival features, an @url{http://www.nongnu.org/lzip/lzip.html, Lzip} compressed tarball is also available at @url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz, @file{gnuastro-latest.tar.lz}}, see @ref{Release tarball} for more details on the tarball release@footnote{The Gzip library and program are commonly available on most systems.
+However, Gnuastro recommends Lzip as described above and the beta-releases are also only distributed in @file{tar.lz}.
+You can download and install Lzip's source (in @file{.tar.gz} format) from its webpage and follow the same process as below: Lzip has no dependencies, so simply decompress, then run @command{./configure}, @command{make}, @command{sudo make install}.}.
+
+Let's assume the downloaded tarball is in the @file{TOPGNUASTRO} directory.
+The first two commands below can be used to decompress the source.
+If you download @file{tar.lz} and your Tar implementation doesn't recognize Lzip (the second command fails), run the third and fourth lines@footnote{In case Tar doesn't directly uncompress your @file{.tar.lz} tarball, you can merge the separate calls to Lzip and Tar (shown in the main body of text) into one command by directly piping the output of Lzip into Tar with a command like this: @command{$ lzip -cd gnuastro-0.5.tar.lz | tar -xf -}}.
+Note that lines starting with @code{##} don't need to be typed.
+Note that lines starting with @code{##} don't need to be typed.
@example
## Go into the download directory.
@@ -822,13 +782,9 @@ $ lzip -d gnuastro-latest.tar.lz
$ tar xf gnuastro-latest.tar
@end example
-Gnuastro has three mandatory dependencies and some optional dependencies
-for extra functionality, see @ref{Dependencies} for the full list. In
-@ref{Dependencies from package managers} we have prepared the command to
-easily install Gnuastro's dependencies using the package manager of some
-operating systems. When the mandatory dependencies are ready, you can
-configure, compile, check and install Gnuastro on your system with the
-following commands.
+Gnuastro has three mandatory dependencies and some optional dependencies for extra functionality, see @ref{Dependencies} for the full list.
+In @ref{Dependencies from package managers} we have prepared the command to easily install Gnuastro's dependencies using the package manager of some operating systems.
+When the mandatory dependencies are ready, you can configure, compile, check and install Gnuastro on your system with the following commands.
@example
$ cd gnuastro-X.X # Replace X.X with version number.
@@ -839,16 +795,12 @@ $ sudo make install
@end example
@noindent
-See @ref{Known issues} if you confront any complications. For each program
-there is an `Invoke ProgramName' sub-section in this book which explains
-how the programs should be run on the command-line (for example
-@ref{Invoking asttable}). You can read the same section on the command-line
-by running @command{$ info astprogname} (for example @command{info
-asttable}). The `Invoke ProgramName' sub-section starts with a few examples
-of each program and goes on to explain the invocation details. See
-@ref{Getting help} for all the options you have to get help. In
-@ref{Tutorials} some real life examples of how these programs might be used
-are given.
+See @ref{Known issues} if you encounter any complications.
+For each program there is an `Invoke ProgramName' sub-section in this book
which explains how the programs should be run on the command-line (for example
@ref{Invoking asttable}).
+You can read the same section on the command-line by running @command{$ info
astprogname} (for example @command{info asttable}).
+The `Invoke ProgramName' sub-section starts with a few examples of each
program and goes on to explain the invocation details.
+See @ref{Getting help} for all the options you have to get help.
+In @ref{Tutorials} some real life examples of how these programs might be used
are given.
@@ -860,136 +812,75 @@ are given.
@node Science and its tools, Your rights, Quick start, Introduction
@section Science and its tools
-History of science indicates that there are always inevitably unseen
-faults, hidden assumptions, simplifications and approximations in all
-our theoretical models, data acquisition and analysis techniques. It
-is precisely these that will ultimately allow future generations to
-advance the existing experimental and theoretical knowledge through
-their new solutions and corrections.
-
-In the past, scientists would gather data and process them individually to
-achieve an analysis thus having a much more intricate knowledge of the data
-and analysis. The theoretical models also required little (if any)
-simulations to compare with the data. Today both methods are becoming
-increasingly more dependent on pre-written software. Scientists are
-dissociating themselves from the intricacies of reducing raw observational
-data in experimentation or from bringing the theoretical models to life in
-simulations. These `intricacies' are precisely those unseen faults, hidden
-assumptions, simplifications and approximations that define scientific
-progress.
+History of science indicates that there are always inevitably unseen faults,
hidden assumptions, simplifications and approximations in all our theoretical
models, data acquisition and analysis techniques.
+It is precisely these that will ultimately allow future generations to advance
the existing experimental and theoretical knowledge through their new solutions
and corrections.
+
+In the past, scientists would gather data and process them individually to
achieve an analysis, thus having a much more intricate knowledge of the data and
analysis.
+The theoretical models also required few (if any) simulations to compare
with the data.
+Today both methods are becoming increasingly more dependent on pre-written
software.
+Scientists are dissociating themselves from the intricacies of reducing raw
observational data in experimentation or from bringing the theoretical models
to life in simulations.
+These `intricacies' are precisely those unseen faults, hidden assumptions,
simplifications and approximations that define scientific progress.
@quotation
@cindex Anscombe F. J.
-Unfortunately, most persons who have recourse to a computer for
-statistical analysis of data are not much interested either in
-computer programming or in statistical method, being primarily
-concerned with their own proper business. Hence the common use of
-library programs and various statistical packages. ... It's time that
-was changed.
-@author F. J. Anscombe. The American Statistician, Vol. 27, No. 1. 1973
+Unfortunately, most persons who have recourse to a computer for statistical
analysis of data are not much interested either in computer programming or in
statistical method, being primarily concerned with their own proper business.
+Hence the common use of library programs and various statistical packages. ...
It's time that was changed.
+@author F.J. Anscombe. The American Statistician, Vol. 27, No. 1. 1973
@end quotation
@cindex Anscombe's quartet
@cindex Statistical analysis
-@url{http://en.wikipedia.org/wiki/Anscombe%27s_quartet,Anscombe's quartet}
-demonstrates how four data sets with widely different shapes (when plotted)
-give nearly identical output from standard regression techniques. Anscombe
-uses this (now famous) quartet, which was introduced in the paper quoted
-above, to argue that ``@emph{Good statistical analysis is not a purely
-routine matter, and generally calls for more than one pass through the
-computer}''. Echoing Anscombe's concern after 44 years, some of the highly
-recognized statisticians of our time (Leek, McShane, Gelman, Colquhoun,
-Nuijten and Goodman), wrote in Nature that:
+@url{http://en.wikipedia.org/wiki/Anscombe%27s_quartet,Anscombe's quartet}
demonstrates how four data sets with widely different shapes (when plotted)
give nearly identical output from standard regression techniques.
+Anscombe uses this (now famous) quartet, which was introduced in the paper
quoted above, to argue that ``@emph{Good statistical analysis is not a purely
routine matter, and generally calls for more than one pass through the
computer}''.
+Echoing Anscombe's concern after 44 years, some of the highly recognized
statisticians of our time (Leek, McShane, Gelman, Colquhoun, Nuijten and
Goodman), wrote in Nature that:
@quotation
-We need to appreciate that data analysis is not purely computational and
-algorithmic — it is a human behaviour....Researchers who hunt hard enough
-will turn up a result that fits statistical criteria — but their discovery
-will probably be a false positive.
+We need to appreciate that data analysis is not purely computational and
algorithmic -- it is a human behaviour....Researchers who hunt hard enough will
turn up a result that fits statistical criteria -- but their discovery will
probably be a false positive.
@author Five ways to fix statistics, Nature, 551, Nov 2017.
@end quotation
-Users of statistical (scientific) methods (software) are therefore not
-passive (objective) agents in their result. Therefore, it is necessary to
-actually understand the method, not just use it as a black box. The
-subjective experience gained by frequently using a method/software is not
-sufficient to claim an understanding of how the tool/method works and how
-relevant it is to the data and analysis. This kind of subjective experience
-is prone to serious misunderstandings about the data, what the
-software/statistical-method really does (especially as it gets more
-complicated), and thus the scientific interpretation of the result. This
-attitude is further encouraged through non-free
-software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}},
-poorly written (or non-existent) scientific software manuals, and
-non-reproducible papers@footnote{Where the authors omit many of the
-analysis/processing ``details'' from the paper by arguing that they would
-make the paper too long/unreadable. However, software engineers have been
-dealing with such issues for a long time. There are thus software
-management solutions that allow us to supplement papers with all the
-details necessary to exactly reproduce the result. For example see
-@url{https://doi.org/10.5281/zenodo.1163746, zenodo.1163746} and
-@url{https://doi.org/10.5281/zenodo.1164774, zenodo.1164774} and this @url{
-http://akhlaghi.org/reproducible-science.html, general discussion}.}. This
-approach to scientific software and methods only helps in producing dogmas
-and an ``@emph{obscurantist faith in the expert's special skill, and in his
-personal knowledge and authority}''@footnote{Karl Popper. The logic of
-scientific discovery. 1959. Larger quote is given at the start of the PDF
-(for print) version of this book.}.
+Users of statistical (scientific) methods (software) are therefore not passive
(objective) agents in their result.
+It is thus necessary to actually understand the method, not just use it
as a black box.
+The subjective experience gained by frequently using a method/software is not
sufficient to claim an understanding of how the tool/method works and how
relevant it is to the data and analysis.
+This kind of subjective experience is prone to serious misunderstandings about
the data, what the software/statistical-method really does (especially as it
gets more complicated), and thus the scientific interpretation of the result.
+This attitude is further encouraged through non-free
software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}}, poorly
written (or non-existent) scientific software manuals, and non-reproducible
papers@footnote{Where the authors omit many of the analysis/processing
``details'' from the paper by arguing that they would make the paper too
long/unreadable.
+However, software engineers have been dealing with such issues for a long time.
+There are thus software management solutions that allow us to supplement
papers with all the details necessary to exactly reproduce the result.
+For example see @url{https://doi.org/10.5281/zenodo.1163746, zenodo.1163746}
and @url{https://doi.org/10.5281/zenodo.1164774, zenodo.1164774} and this @url{
http://akhlaghi.org/reproducible-science.html, general discussion}.}.
+This approach to scientific software and methods only helps in producing
dogmas and an ``@emph{obscurantist faith in the expert's special skill, and in
his personal knowledge and authority}''@footnote{Karl Popper. The logic of
scientific discovery. 1959.
+A larger quote is given at the start of the PDF (for print) version of this
book.}.
@quotation
@cindex Douglas Rushkoff
-Program or be programmed. Choose the former, and you gain access to
-the control panel of civilization. Choose the latter, and it could be
-the last real choice you get to make.
+Program or be programmed.
+Choose the former, and you gain access to the control panel of civilization.
+Choose the latter, and it could be the last real choice you get to make.
@author Douglas Rushkoff. Program or be programmed, O/R Books (2010).
@end quotation
-It is obviously impractical for any one human being to gain the intricate
-knowledge explained above for every step of an analysis. On the other hand,
-scientific data can be large and numerous, for example images produced by
-telescopes in astronomy. This requires efficient algorithms. To make things
-worse, natural scientists have generally not been trained in the advanced
-software techniques, paradigms and architecture that are taught in computer
-science or engineering courses and thus used in most software. The GNU
-Astronomy Utilities are an effort to tackle this issue.
-
-Gnuastro is not just a software, this book is as important to the idea
-behind Gnuastro as the source code (software). This book has tried to learn
-from the success of the ``Numerical Recipes'' book in educating those who
-are not software engineers and computer scientists but still heavy users of
-computational algorithms, like astronomers. There are two major
-differences.
-
-The first difference is that Gnuastro's code and the background information
-are segregated: the code is moved within the actual Gnuastro software
-source code and the underlying explanations are given here in this book. In
-the source code, every non-trivial step is heavily commented and correlated
-with this book, it follows the same logic of this book, and all the
-programs follow a similar internal data, function and file structure, see
-@ref{Program source}. Complementing the code, this book focuses on
-thoroughly explaining the concepts behind those codes (history,
-mathematics, science, software and usage advise when necessary) along with
-detailed instructions on how to run the programs. At the expense of
-frustrating ``professionals'' or ``experts'', this book and the comments in
-the code also intentionally avoid jargon and abbreviations. The source code
-and this book are thus intimately linked, and when considered as a single
-entity can be thought of as a real (an actual software accompanying the
-algorithms) ``Numerical Recipes'' for astronomy.
+It is obviously impractical for any one human being to gain the intricate
knowledge explained above for every step of an analysis.
+On the other hand, scientific data can be large and numerous, for example
images produced by telescopes in astronomy.
+This requires efficient algorithms.
+To make things worse, natural scientists have generally not been trained in
the advanced software techniques, paradigms and architecture that are taught in
computer science or engineering courses and thus used in most software.
+The GNU Astronomy Utilities are an effort to tackle this issue.
+
+Gnuastro is not just software; this book is as important to the idea behind
Gnuastro as the source code (software).
+This book has tried to learn from the success of the ``Numerical Recipes''
book in educating those who are not software engineers and computer scientists
but still heavy users of computational algorithms, like astronomers.
+There are two major differences.
+
+The first difference is that Gnuastro's code and the background information
are segregated: the code is moved within the actual Gnuastro software source
code and the underlying explanations are given here in this book.
+In the source code, every non-trivial step is heavily commented and correlated
with this book; it follows the same logic as this book, and all the programs
follow a similar internal data, function and file structure, see @ref{Program
source}.
+Complementing the code, this book focuses on thoroughly explaining the
concepts behind those codes (history, mathematics, science, software and usage
advice when necessary) along with detailed instructions on how to run the
programs.
+At the expense of frustrating ``professionals'' or ``experts'', this book and
the comments in the code also intentionally avoid jargon and abbreviations.
+The source code and this book are thus intimately linked, and when considered
as a single entity can be thought of as a real (an actual software accompanying
the algorithms) ``Numerical Recipes'' for astronomy.
@cindex GNU free documentation license
@cindex GNU General Public License (GPL)
-The second major, and arguably more important, difference is that
-``Numerical Recipes'' does not allow you to distribute any code that you
-have learned from it. In other words, it does not allow you to release your
-software's source code if you have used their codes, you can only publicly
-release binaries (a black box) to the community. Therefore, while it
-empowers the privileged individual who has access to it, it exacerbates
-social ignorance. Exactly at the opposite end of the spectrum, Gnuastro's
-source code is released under the GNU general public license (GPL) and this
-book is released under the GNU free documentation license. You are
-therefore free to distribute any software you create using parts of
-Gnuastro's source code or text, or figures from this book, see @ref{Your
-rights}.
+The second major, and arguably more important, difference is that ``Numerical
Recipes'' does not allow you to distribute any code that you have learned from
it.
+In other words, it does not allow you to release your software's source code
if you have used their codes; you can only publicly release binaries (a black
box) to the community.
+Therefore, while it empowers the privileged individual who has access to it,
it exacerbates social ignorance.
+Exactly at the opposite end of the spectrum, Gnuastro's source code is
released under the GNU general public license (GPL) and this book is released
under the GNU free documentation license.
+You are therefore free to distribute any software you create using parts of
Gnuastro's source code or text, or figures from this book, see @ref{Your
rights}.
With these principles in mind, Gnuastro's developers aim to impose the
minimum requirements on you (in computer science, engineering and even the
@@ -999,77 +890,41 @@ philosophy}.
@cindex Brahe, Tycho
@cindex Galileo, Galilei
-Without prior familiarity and experience with optics, it is hard to imagine
-how, Galileo could have come up with the idea of modifying the Dutch
-military telescope optics to use in astronomy. Astronomical objects could
-not be seen with the Dutch military design of the telescope. In other
-words, it is unlikely that Galileo could have asked a random optician to
-make modifications (not understood by Galileo) to the Dutch design, to do
-something no astronomer of the time took seriously. In the paradigm of the
-day, what could be the purpose of enlarging geometric spheres (planets) or
-points (stars)? In that paradigm only the position and movement of the
-heavenly bodies was important, and that had already been accurately studied
-(recently by Tycho Brahe).
-
-In the beginning of his ``The Sidereal Messenger'' (published in 1610) he
-cautions the readers on this issue and @emph{before} describing his
-results/observations, Galileo instructs us on how to build a suitable
-instrument. Without a detailed description of @emph{how} he made his tools
-and done his observations, no reasonable person would believe his
-results. Before he actually saw the moons of Jupiter, the mountains on the
-Moon or the crescent of Venus, Galileo was “evasive”@footnote{Galileo
-G. (Translated by Maurice A. Finocchiaro). @emph{The essential
-Galileo}. Hackett publishing company, first edition, 2008.} to
-Kepler. Science is defined by its tools/methods, @emph{not} its raw
-results@footnote{For example, take the following two results on the age of
-the universe: roughly 14 billion years (suggested by the current consensus
-of the standard model of cosmology) and less than 10,000 years (suggested
-from some interpretations of the Bible). Both these numbers are
-@emph{results}. What distinguishes these two results, is the tools/methods
-that were used to derive them. Therefore, as the term ``Scientific method''
-also signifies, a scientific statement it defined by its @emph{method}, not
-its result.}.
-
-The same is true today: science cannot progress with a black box, or poorly
-released code. The source code of a research is the new (abstractified)
-communication language in science, understandable by humans @emph{and}
-computers. Source code (in any programming language) is a language/notation
-designed to express all the details that would be too
-tedious/long/frustrating to report in spoken languages like English,
-similar to mathematic notation.
-
-Today, the quality of the source code that goes into a scientific result
-(and the distribution of that code) is as critical to scientific vitality
-and integrity, as the quality of its written language/English used in
-publishing/distributing its paper. A scientific paper will not even be
-reviewed by any respectable journal if its written in a poor
-language/English. A similar level of quality assessment is thus
-increasingly becoming necessary regarding the codes/methods used to derive
-the results of a scientific paper.
+Without prior familiarity and experience with optics, it is hard to imagine
how Galileo could have come up with the idea of modifying the Dutch military
telescope optics to use in astronomy.
+Astronomical objects could not be seen with the Dutch military design of the
telescope.
+In other words, it is unlikely that Galileo could have asked a random optician
to make modifications (not understood by Galileo) to the Dutch design, to do
something no astronomer of the time took seriously.
+In the paradigm of the day, what could be the purpose of enlarging geometric
spheres (planets) or points (stars)? In that paradigm only the position and
movement of the heavenly bodies was important, and that had already been
accurately studied (recently by Tycho Brahe).
+
+In the beginning of his ``The Sidereal Messenger'' (published in 1610) he
cautions the readers on this issue and @emph{before} describing his
results/observations, Galileo instructs us on how to build a suitable
instrument.
+Without a detailed description of @emph{how} he made his tools and did his
observations, no reasonable person would believe his results.
+Before he actually saw the moons of Jupiter, the mountains on the Moon or the
crescent of Venus, Galileo was “evasive”@footnote{Galileo G. (Translated by
Maurice A. Finocchiaro). @emph{The essential Galileo}. Hackett publishing
company, first edition, 2008.} to Kepler.
+Science is defined by its tools/methods, @emph{not} its raw
results@footnote{For example, take the following two results on the age of the
universe: roughly 14 billion years (suggested by the current consensus of the
standard model of cosmology) and less than 10,000 years (suggested from some
interpretations of the Bible).
+Both these numbers are @emph{results}.
+What distinguishes these two results, is the tools/methods that were used to
derive them.
+Therefore, as the term ``Scientific method'' also signifies, a scientific
statement is defined by its @emph{method}, not its result.}.
+
+The same is true today: science cannot progress with a black box, or poorly
released code.
+The source code of a research project is the new (abstractified) communication
language in science, understandable by humans @emph{and} computers.
+Source code (in any programming language) is a language/notation designed to
express all the details that would be too tedious/long/frustrating to report in
spoken languages like English, similar to mathematical notation.
+
+Today, the quality of the source code that goes into a scientific result (and
the distribution of that code) is as critical to scientific vitality and
integrity, as the quality of its written language/English used in
publishing/distributing its paper.
+A scientific paper will not even be reviewed by any respectable journal if it is
written in a poor language/English.
+A similar level of quality assessment is thus increasingly becoming necessary
regarding the codes/methods used to derive the results of a scientific paper.
@cindex Ken Thompson
@cindex Stroustrup, Bjarne
-Bjarne Stroustrup (creator of the C++ language) says: ``@emph{Without
-understanding software, you are reduced to believing in magic}''. Ken
-Thomson (the designer or the Unix operating system) says ``@emph{I abhor a
-system designed for the `user' if that word is a coded pejorative meaning
-`stupid and unsophisticated'}.'' Certainly no scientist (user of a
-scientific software) would want to be considered a believer in magic, or
-stupid and unsophisticated.
-
-This can happen when scientists get too distant from the raw data and
-methods, and are mainly discussing results. In other words, when they feel
-they have tamed Nature into their own high-level (abstract) models
-(creations), and are mainly concerned with scaling up, or industrializing
-those results. Roughly five years before special relativity, and about two
-decades before quantum mechanics fundamentally changed Physics, Lord Kelvin
-is quoted as saying:
+Bjarne Stroustrup (creator of the C++ language) says: ``@emph{Without
understanding software, you are reduced to believing in magic}''.
+Ken Thompson (the designer of the Unix operating system) says ``@emph{I abhor a
system designed for the `user' if that word is a coded pejorative meaning
`stupid and unsophisticated'}.'' Certainly no scientist (user of a scientific
software) would want to be considered a believer in magic, or stupid and
unsophisticated.
+
+This can happen when scientists get too distant from the raw data and methods,
and are mainly discussing results.
+In other words, when they feel they have tamed Nature into their own
high-level (abstract) models (creations), and are mainly concerned with scaling
up, or industrializing those results.
+Roughly five years before special relativity, and about two decades before
quantum mechanics fundamentally changed Physics, Lord Kelvin is quoted as
saying:
@quotation
@cindex Lord Kelvin
@cindex William Thomson
-There is nothing new to be discovered in physics now. All that remains
-is more and more precise measurement.
+There is nothing new to be discovered in physics now.
+All that remains is more and more precise measurement.
@author William Thomson (Lord Kelvin), 1900
@end quotation
@@ -1079,36 +934,21 @@ A few years earlier Albert. A. Michelson made the
following statement:
@quotation
@cindex Albert. A. Michelson
@cindex Michelson, Albert. A.
-The more important fundamental laws and facts of physical science have
-all been discovered, and these are now so firmly established that the
-possibility of their ever being supplanted in consequence of new
-discoveries is exceedingly remote.... Our future discoveries must be
-looked for in the sixth place of decimals.
+The more important fundamental laws and facts of physical science have all
been discovered, and these are now so firmly established that the possibility
of their ever being supplanted in consequence of new discoveries is exceedingly
remote....
+Our future discoveries must be looked for in the sixth place of decimals.
@author Albert. A. Michelson, dedication of Ryerson Physics Lab, U. Chicago
1894
@end quotation
@cindex Puzzle solving scientist
@cindex Scientist, puzzle solver
-If scientists are considered to be more than mere ``puzzle''
-solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific
-Revolutions}, University of Chicago Press, 1962.} (simply adding to the
-decimals of existing values or observing a feature in 10, 100, or 100000
-more galaxies or stars, as Kelvin and Michelson clearly believed), they
-cannot just passively sit back and uncritically repeat the previous
-(observational or theoretical) methods/tools on new data. Today there is a
-wealth of raw telescope images ready (mostly for free) at the finger tips
-of anyone who is interested with a fast enough internet connection to
-download them. The only thing lacking is new ways to analyze this data and
-dig out the treasure that is lying hidden in them to existing methods and
-techniques.
+If scientists are considered to be more than mere ``puzzle''
solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific
Revolutions}, University of Chicago Press, 1962.} (simply adding to the
decimals of existing values or observing a feature in 10, 100, or 100000 more
galaxies or stars, as Kelvin and Michelson clearly believed), they cannot just
passively sit back and uncritically repeat the previous (observational or
theoretical) methods/tools on new data.
+Today there is a wealth of raw telescope images ready (mostly for free) at the
fingertips of anyone interested who has a fast enough internet connection
to download them.
+The only thing lacking is new ways to analyze this data and dig out the
treasure that is lying hidden in them from existing methods and techniques.
@quotation
@cindex Jaynes E. T.
-New data that we insist on analyzing in terms of old ideas (that is,
-old models which are not questioned) cannot lead us out of the old
-ideas. However many data we record and analyze, we may just keep
-repeating the same old errors, missing the same crucially important
-things that the experiment was competent to find.
+New data that we insist on analyzing in terms of old ideas (that is, old
models which are not questioned) cannot lead us out of the old ideas.
+However many data we record and analyze, we may just keep repeating the same
old errors, missing the same crucially important things that the experiment was
competent to find.
@author Jaynes, Probability theory, the logic of science. Cambridge U. Press
(2003).
@end quotation
@@ -1119,52 +959,29 @@ things that the experiment was competent to find.
@section Your rights
@cindex GNU Texinfo
-The paragraphs below, in this section, belong to the GNU
-Texinfo@footnote{Texinfo is the GNU documentation system. It is used
-to create this book in all the various formats.} manual and are not
-written by us! The name ``Texinfo'' is just changed to ``GNU Astronomy
-Utilities'' or ``Gnuastro'' because they are released under the same
-licenses and it is beautifully written to inform you of your rights.
+The paragraphs below, in this section, belong to the GNU
Texinfo@footnote{Texinfo is the GNU documentation system.
+It is used to create this book in all the various formats.} manual and are not
written by us! The name ``Texinfo'' is just changed to ``GNU Astronomy
Utilities'' or ``Gnuastro'' because they are released under the same licenses
and it is beautifully written to inform you of your rights.
@cindex Free software
@cindex Copyright
@cindex Public domain
-GNU Astronomy Utilities is ``free software''; this means that everyone
-is free to use it and free to redistribute it on certain
-conditions. Gnuastro is not in the public domain; it is copyrighted
-and there are restrictions on its distribution, but these restrictions
-are designed to permit everything that a good cooperating citizen
-would want to do. What is not allowed is to try to prevent others
-from further sharing any version of Gnuastro that they might get from
-you.
-
-Specifically, we want to make sure that you have the right to give
-away copies of the programs that relate to Gnuastro, that you receive
-the source code or else can get it if you want it, that you can change
-these programs or use pieces of them in new free programs, and that
-you know you can do these things.
-
-To make sure that everyone has such rights, we have to forbid you to
-deprive anyone else of these rights. For example, if you distribute
-copies of the Gnuastro related programs, you must give the recipients
-all the rights that you have. You must make sure that they, too,
-receive or can get the source code. And you must tell them their
-rights.
-
-Also, for our own protection, we must make certain that everyone finds
-out that there is no warranty for the programs that relate to Gnuastro.
-If these programs are modified by someone else and passed on, we want
-their recipients to know that what they have is not what we distributed,
-so that any problems introduced by others will not reflect on our
-reputation.
+GNU Astronomy Utilities is ``free software''; this means that everyone is free
to use it and free to redistribute it on certain conditions.
+Gnuastro is not in the public domain; it is copyrighted and there are
restrictions on its distribution, but these restrictions are designed to permit
everything that a good cooperating citizen would want to do.
+What is not allowed is to try to prevent others from further sharing any
version of Gnuastro that they might get from you.
+
+Specifically, we want to make sure that you have the right to give away copies
of the programs that relate to Gnuastro, that you receive the source code or
else can get it if you want it, that you can change these programs or use
pieces of them in new free programs, and that you know you can do these things.
+
+To make sure that everyone has such rights, we have to forbid you to deprive
anyone else of these rights.
+For example, if you distribute copies of the Gnuastro related programs, you
must give the recipients all the rights that you have.
+You must make sure that they, too, receive or can get the source code.
+And you must tell them their rights.
+
+Also, for our own protection, we must make certain that everyone finds out
that there is no warranty for the programs that relate to Gnuastro.
+If these programs are modified by someone else and passed on, we want their
recipients to know that what they have is not what we distributed, so that any
problems introduced by others will not reflect on our reputation.
@cindex GNU General Public License (GPL)
@cindex GNU Free Documentation License
-The full text of the licenses for the Gnuastro book and software can be
-respectively found in @ref{GNU General Public License}@footnote{Also
-available in @url{http://www.gnu.org/copyleft/gpl.html}} and @ref{GNU Free
-Doc. License}@footnote{Also available in
-@url{http://www.gnu.org/copyleft/fdl.html}}.
+The full text of the licenses for the Gnuastro book and software can be
respectively found in @ref{GNU General Public License}@footnote{Also available
in @url{http://www.gnu.org/copyleft/gpl.html}} and @ref{GNU Free Doc.
License}@footnote{Also available in @url{http://www.gnu.org/copyleft/fdl.html}}.
@@ -1173,24 +990,17 @@ Doc. License}@footnote{Also available in
@cindex Names, programs
@cindex Program names
-Gnuastro is a package of independent programs and a collection of
-libraries, here we are mainly concerned with the programs. Each program has
-an official name which consists of one or two words, describing what they
-do. The latter are printed with no space, for example NoiseChisel or
-Crop. On the command-line, you can run them with their executable names
-which start with an @file{ast} and might be an abbreviation of the official
-name, for example @file{astnoisechisel} or @file{astcrop}, see
-@ref{Executable names}.
+Gnuastro is a package of independent programs and a collection of libraries;
here we are mainly concerned with the programs.
+Each program has an official name which consists of one or two words,
describing what they do.
+The latter are printed with no space, for example NoiseChisel or Crop.
+On the command-line, you can run them with their executable names which start
with an @file{ast} and might be an abbreviation of the official name, for
example @file{astnoisechisel} or @file{astcrop}, see @ref{Executable names}.
@pindex ProgramName
@pindex @file{astprogname}
-We will use ``ProgramName'' for a generic official program name and
-@file{astprogname} for a generic executable name. In this book, the
-programs are classified based on what they do and thoroughly explained. An
-alphabetical list of the programs that are installed on your system with
-this installation are given in @ref{Gnuastro programs list}. That list also
-contains the executable names and version numbers along with a one line
-description.
+We will use ``ProgramName'' for a generic official program name and
@file{astprogname} for a generic executable name.
+In this book, the programs are classified based on what they do and thoroughly
explained.
+An alphabetical list of the programs that are installed on your system with
this installation is given in @ref{Gnuastro programs list}.
+That list also contains the executable names and version numbers along with a
one-line description.
@@ -1202,54 +1012,31 @@ description.
@cindex Major version number
@cindex Minor version number
@cindex Mailing list: info-gnuastro
-Gnuastro can have two formats of version numbers, for official and
-unofficial releases. Official Gnuastro releases are announced on the
-@command{info-gnuastro} mailing list, they have a version control tag in
-Gnuastro's development history, and their version numbers are formatted
-like ``@file{A.B}''. @file{A} is a major version number, marking a
-significant planned achievement (for example see @ref{GNU Astronomy
-Utilities 1.0}), while @file{B} is a minor version number, see below for
-more on the distinction. Note that the numbers are not decimals, so version
-2.34 is much more recent than version 2.5, which is not equal to 2.50.
-
-Gnuastro also allows a unique version number for unofficial
-releases. Unofficial releases can mark any point in Gnuastro's development
-history. This is done to allow astronomers to easily use any point in the
-version controlled history for their data-analysis and research
-publication. See @ref{Version controlled source} for a complete
-introduction. This section is not just for developers and is intended to
-straightforward and easy to read, so please have a look if you are
-interested in the cutting-edge. This unofficial version number is a
-meaningful and easy to read string of characters, unique to that particular
-point of history. With this feature, users can easily stay up to date with
-the most recent bug fixes and additions that are committed between official
-releases.
-
-The unofficial version number is formatted like: @file{A.B.C-D}. @file{A}
-and @file{B} are the most recent official version number. @file{C} is the
-number of commits that have been made after version @file{A.B}. @file{D} is
-the first 4 or 5 characters of the commit hash number@footnote{Each point
-in Gnuastro's history is uniquely identified with a 40 character long hash
-which is created from its contents and previous history for example:
-@code{5b17501d8f29ba3cd610673261e6e2229c846d35}. So the string @file{D} in
-the version for this commit could be @file{5b17}, or
-@file{5b175}.}. Therefore, the unofficial version number
-`@code{3.92.8-29c8}', corresponds to the 8th commit after the official
-version @code{3.92} and its commit hash begins with @code{29c8}. The
-unofficial version number is sort-able (unlike the raw hash) and as shown
-above is descriptive of the state of the unofficial release. Of course an
-official release is preferred for publication (since its tarballs are
-easily available and it has gone through more tests, making it more
-stable), so if an official release is announced prior to your publication's
-final review, please consider updating to the official release.
-
-The major version number is set by a major goal which is defined by the
-developers and user community before hand, for example see @ref{GNU
-Astronomy Utilities 1.0}. The incremental work done in minor releases are
-commonly small steps in achieving the major goal. Therefore, there is no
-limit on the number of minor releases and the difference between the
-(hypothetical) versions 2.927 and 3.0 can be a small (negligible to the
-user) improvement that finalizes the defined goals.
+Gnuastro can have two formats of version numbers, for official and unofficial
releases.
+Official Gnuastro releases are announced on the @command{info-gnuastro}
mailing list, they have a version control tag in Gnuastro's development
history, and their version numbers are formatted like ``@file{A.B}''.
+@file{A} is a major version number, marking a significant planned achievement
(for example see @ref{GNU Astronomy Utilities 1.0}), while @file{B} is a minor
version number, see below for more on the distinction.
+Note that the numbers are not decimals, so version 2.34 is much more recent
than version 2.5, which is not equal to 2.50.
+
+Gnuastro also allows a unique version number for unofficial releases.
+Unofficial releases can mark any point in Gnuastro's development history.
+This is done to allow astronomers to easily use any point in the version
controlled history for their data-analysis and research publication.
+See @ref{Version controlled source} for a complete introduction.
+This section is not just for developers and is intended to be straightforward
and easy to read, so please have a look if you are interested in the cutting edge.
+This unofficial version number is a meaningful and easy to read string of
characters, unique to that particular point of history.
+With this feature, users can easily stay up to date with the most recent bug
fixes and additions that are committed between official releases.
+
+The unofficial version number is formatted like: @file{A.B.C-D}.
+@file{A} and @file{B} are the most recent official version number.
+@file{C} is the number of commits that have been made after version @file{A.B}.
+@file{D} is the first 4 or 5 characters of the commit hash
number@footnote{Each point in Gnuastro's history is uniquely identified with a
40-character hash which is created from its contents and previous history, for
example: @code{5b17501d8f29ba3cd610673261e6e2229c846d35}.
+So the string @file{D} in the version for this commit could be @file{5b17}, or
@file{5b175}.}.
+Therefore, the unofficial version number `@code{3.92.8-29c8}' corresponds to
the 8th commit after the official version @code{3.92} and its commit hash
begins with @code{29c8}.
+The unofficial version number is sort-able (unlike the raw hash) and as shown
above is descriptive of the state of the unofficial release.
+Of course an official release is preferred for publication (since its tarballs
are easily available and it has gone through more tests, making it more
stable), so if an official release is announced prior to your publication's
final review, please consider updating to the official release.
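As a small illustration of the sort-ability of the @file{A.B.C-D} format, GNU @command{sort}'s version-sort mode orders such strings in historical order, which the raw commit hashes alone would not allow. The version strings below are hypothetical examples, not real Gnuastro releases:

```shell
# Hypothetical unofficial A.B.C-D version strings, deliberately out of
# order; GNU sort's -V (version sort) puts them in historical order:
printf '%s\n' 3.92.10-a1b2 3.92.2-29c8 3.92.9-f00d | sort -V
# → 3.92.2-29c8
#   3.92.9-f00d
#   3.92.10-a1b2
```

In a version-controlled checkout, such a string is typically derived from the output of @command{git describe}; the exact tag names depend on the repository.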
+
+The major version number is set by a major goal which is defined by the
developers and user community beforehand, for example see @ref{GNU Astronomy
Utilities 1.0}.
+The incremental work done in minor releases is commonly a series of small
steps toward achieving the major goal.
+Therefore, there is no limit on the number of minor releases and the
difference between the (hypothetical) versions 2.927 and 3.0 can be a small
(negligible to the user) improvement that finalizes the defined goals.
@menu
* GNU Astronomy Utilities 1.0:: Plans for version 1.0 release
@@ -1258,34 +1045,21 @@ user) improvement that finalizes the defined goals.
@node GNU Astronomy Utilities 1.0, , Version numbering, Version numbering
@subsection GNU Astronomy Utilities 1.0
@cindex Gnuastro major version number
-Currently (prior to Gnuastro 1.0), the aim of Gnuastro is to have a
-complete system for data manipulation and analysis at least similar to
-IRAF@footnote{@url{http://iraf.noao.edu/}}. So an astronomer can take all
-the standard data analysis steps (starting from raw data to the final
-reduced product and standard post-reduction tools) with the various
-programs in Gnuastro.
+Currently (prior to Gnuastro 1.0), the aim of Gnuastro is to have a complete
system for data manipulation and analysis at least similar to
IRAF@footnote{@url{http://iraf.noao.edu/}}.
+So an astronomer can take all the standard data analysis steps (starting from
raw data to the final reduced product and standard post-reduction tools) with
the various programs in Gnuastro.
@cindex Shell script
-The maintainers of each camera or detector on a telescope can provide a
-completely transparent shell script or Makefile to the observer for data
-analysis. This script can set configuration files for all the required
-programs to work with that particular camera. The script can then run the
-proper programs in the proper sequence. The user/observer can easily follow
-the standard shell script to understand (and modify) each step and the
-parameters used easily. Bash (or other modern GNU/Linux shell scripts) is
-powerful and made for this gluing job. This will simultaneously improve
-performance and transparency. Shell scripting (or Makefiles) are also basic
-constructs that are easy to learn and readily available as part of the
-Unix-like operating systems. If there is no program to do a desired step,
-Gnuastro's libraries can be used to build specific programs.
+The maintainers of each camera or detector on a telescope can provide a
completely transparent shell script or Makefile to the observer for data
analysis.
+This script can set configuration files for all the required programs to work
with that particular camera.
+The script can then run the proper programs in the proper sequence.
+The user/observer can easily follow the standard shell script to understand
(and modify) each step and the parameters used.
+Bash (or other modern GNU/Linux shell scripts) is powerful and made for this
gluing job.
+This will simultaneously improve performance and transparency.
+Shell scripts (or Makefiles) are also basic constructs that are easy to learn
and readily available as part of Unix-like operating systems.
+If there is no program to do a desired step, Gnuastro's libraries can be used
to build specific programs.
-The main factor is that all observatories or projects can freely contribute
-to Gnuastro and all simultaneously benefit from it (since it doesn't belong
-to any particular one of them), much like how for-profit organizations (for
-example RedHat, or Intel and many others) are major contributors to free
-and open source software for their shared benefit. Gnuastro's copyright has
-been fully awarded to GNU, so it doesn't belong to any particular
-astronomer or astronomical facility or project.
+The main factor is that all observatories or projects can freely contribute to
Gnuastro and all simultaneously benefit from it (since it doesn't belong to any
particular one of them), much like how for-profit organizations (for example
Red Hat, Intel, and many others) are major contributors to free and open
source software for their shared benefit.
+Gnuastro's copyright has been fully awarded to GNU, so it doesn't belong to
any particular astronomer or astronomical facility or project.
@@ -1294,24 +1068,15 @@ astronomer or astronomical facility or project.
@node New to GNU/Linux?, Report a bug, Version numbering, Introduction
@section New to GNU/Linux?
-Some astronomers initially install and use a GNU/Linux operating system
-because their necessary tools can only be installed in this environment.
-However, the transition is not necessarily easy. To encourage you in
-investing the patience and time to make this transition, and actually enjoy
-it, we will first start with a basic introduction to GNU/Linux operating
-systems. Afterwards, in @ref{Command-line interface} we'll discuss the
-wonderful benefits of the command-line interface, how it beautifully
-complements the graphic user interface, and why it is worth the (apparently
-steep) learning curve. Finally a complete chapter (@ref{Tutorials}) is
-devoted to real world scenarios of using Gnuastro (on the
-command-line). Therefore if you don't yet feel comfortable with the
-command-line we strongly recommend going through that chapter after
-finishing this section.
-
-You might have already noticed that we are not using the name ``Linux'',
-but ``GNU/Linux''. Please take the time to have a look at the following
-essays and FAQs for a complete understanding of this very important
-distinction.
+Some astronomers initially install and use a GNU/Linux operating system
because their necessary tools can only be installed in this environment.
+However, the transition is not necessarily easy.
+To encourage you in investing the patience and time to make this transition,
and actually enjoy it, we will first start with a basic introduction to
GNU/Linux operating systems.
+Afterwards, in @ref{Command-line interface} we'll discuss the wonderful
benefits of the command-line interface, how it beautifully complements the
graphic user interface, and why it is worth the (apparently steep) learning
curve.
+Finally, a complete chapter (@ref{Tutorials}) is devoted to real-world
scenarios of using Gnuastro (on the command-line).
+Therefore, if you don't yet feel comfortable with the command-line, we
strongly recommend going through that chapter after finishing this section.
+
+You might have already noticed that we are not using the name ``Linux'', but
``GNU/Linux''.
+Please take the time to have a look at the following essays and FAQs for a
complete understanding of this very important distinction.
@itemize
@@ -1333,20 +1098,12 @@ distinction.
@cindex GNU/Linux
@cindex GNU C library
@cindex GNU Compiler Collection
-In short, the Linux kernel@footnote{In Unix-like operating systems, the
-kernel connects software and hardware worlds.} is built using the GNU C
-library (glibc) and GNU compiler collection (gcc). The Linux kernel
-software alone is just a means for other software to access the hardware
-resources, it is useless alone: to say “running Linux”, is like saying
-“driving your carburetor”.
-
-To have an operating system, you need lower-level (to build the kernel),
-and higher-level (to use it) software packages. The majority of such
-software in most Unix-like operating systems are GNU software: ``the whole
-system is basically GNU with Linux loaded''. Therefore to acknowledge GNU's
-instrumental role in the creation and usage of the Linux kernel and the
-operating systems that use it, we should call these operating systems
-``GNU/Linux''.
+In short, the Linux kernel@footnote{In Unix-like operating systems, the kernel
connects software and hardware worlds.} is built using the GNU C library
(glibc) and GNU compiler collection (gcc).
+The Linux kernel software alone is just a means for other software to access
the hardware resources; it is useless by itself: to say ``running Linux'' is
like saying ``driving your carburetor''.
+
+To have an operating system, you need lower-level (to build the kernel), and
higher-level (to use it) software packages.
+The majority of such software in most Unix-like operating systems is GNU
software: ``the whole system is basically GNU with Linux loaded''.
+Therefore to acknowledge GNU's instrumental role in the creation and usage of
the Linux kernel and the operating systems that use it, we should call these
operating systems ``GNU/Linux''.
@menu
@@ -1360,113 +1117,67 @@ operating systems that use it, we should call these
operating systems
@cindex Command-line user interface
@cindex GUI: graphic user interface
@cindex CLI: command-line user interface
-One aspect of Gnuastro that might be a little troubling to new GNU/Linux
-users is that (at least for the time being) it only has a command-line user
-interface (CLI). This might be contrary to the mostly graphical user
-interface (GUI) experience with proprietary operating systems. Since the
-various actions available aren't always on the screen, the command-line
-interface can be complicated, intimidating, and frustrating for a
-first-time user. This is understandable and also experienced by anyone who
-started using the computer (from childhood) in a graphical user interface
-(this includes most of Gnuastro's authors). Here we hope to convince you of
-the unique benefits of this interface which can greatly enhance your
-productivity while complementing your GUI experience.
+One aspect of Gnuastro that might be a little troubling to new GNU/Linux users
is that (at least for the time being) it only has a command-line user interface
(CLI).
+This might be contrary to the mostly graphical user interface (GUI) experience
with proprietary operating systems.
+Since the various actions available aren't always on the screen, the
command-line interface can be complicated, intimidating, and frustrating for a
first-time user.
+This is understandable and also experienced by anyone who started using the
computer (from childhood) in a graphical user interface (this includes most of
Gnuastro's authors).
+Here we hope to convince you of the unique benefits of this interface which
can greatly enhance your productivity while complementing your GUI experience.
@cindex GNOME 3
-Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based
-operating systems now have an advanced and useful GUI. Since the GUI was
-created long after the command-line, some wrongly consider the command line
-to be obsolete. Both interfaces are useful for different tasks. For example
-you can't view an image, video, pdf document or web page on the
-command-line. On the other hand you can't reproduce your results easily in
-the GUI. Therefore they should not be regarded as rivals but as
-complementary user interfaces, here we will outline how the CLI can be
-useful in scientific programs.
-
-You can think of the GUI as a veneer over the CLI to facilitate a small
-subset of all the possible CLI operations. Each click you do on the GUI,
-can be thought of as internally running a different CLI command. So
-asymptotically (if a good designer can design a GUI which is able to show
-you all the possibilities to click on) the GUI is only as powerful as the
-command-line. In practice, such graphical designers are very hard to find
-for every program, so the GUI operations are always a subset of the
-internal CLI commands. For programs that are only made for the GUI, this
-results in not including lots of potentially useful operations. It also
-results in `interface design' to be a crucially important part of any GUI
-program. Scientists don't usually have enough resources to hire a graphical
-designer, also the complexity of the GUI code is far more than CLI code,
-which is harmful for a scientific software, see @ref{Science and its
-tools}.
+Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based
operating systems now have an advanced and useful GUI.
+Since the GUI was created long after the command-line, some wrongly consider
the command line to be obsolete.
+Both interfaces are useful for different tasks.
+For example, you can't view an image, video, PDF document or web page on the
command-line.
+On the other hand you can't reproduce your results easily in the GUI.
+Therefore, they should not be regarded as rivals but as complementary user
interfaces; here we will outline how the CLI can be useful in scientific
programs.
+
+You can think of the GUI as a veneer over the CLI to facilitate a small subset
of all the possible CLI operations.
+Each click you do on the GUI can be thought of as internally running a
different CLI command.
+So asymptotically (if a good designer can design a GUI which is able to show
you all the possibilities to click on) the GUI is only as powerful as the
command-line.
+In practice, such graphical designers are very hard to find for every program,
so the GUI operations are always a subset of the internal CLI commands.
+For programs that are only made for the GUI, this results in not including
lots of potentially useful operations.
+It also makes `interface design' a crucially important part of any GUI
program.
+Scientists don't usually have enough resources to hire a graphical designer;
also, the complexity of GUI code is far greater than that of CLI code, which is
harmful for scientific software, see @ref{Science and its tools}.
@cindex GUI: repeating operations
-For programs that have a GUI, one action on the GUI (moving and clicking a
-mouse, or tapping a touchscreen) might be more efficient and easier than
-its CLI counterpart (typing the program name and your desired
-configuration). However, if you have to repeat that same action more than
-once, the GUI will soon become frustrating and prone to errors. Unless the
-designers of a particular program decided to design such a system for a
-particular GUI action, there is no general way to run any possible series
-of actions automatically on the GUI.
+For programs that have a GUI, one action on the GUI (moving and clicking a
mouse, or tapping a touchscreen) might be more efficient and easier than its
CLI counterpart (typing the program name and your desired configuration).
+However, if you have to repeat that same action more than once, the GUI will
soon become frustrating and prone to errors.
+Unless the designers of a particular program decided to design such a system
for a particular GUI action, there is no general way to run any possible series
of actions automatically on the GUI.
@cindex GNU Bash
@cindex Reproducible results
@cindex CLI: repeating operations
-On the command-line, you can run any series of of actions which can come
-from various CLI capable programs you have decided your self in any
-possible permutation with one command@footnote{By writing a shell script
-and running it, for example see the tutorials in @ref{Tutorials}.}. This
-allows for much more creativity and exact reproducibility that is not
-possible to a GUI user. For technical and scientific operations, where the
-same operation (using various programs) has to be done on a large set of
-data files, this is crucially important. It also allows exact
-reproducibility which is a foundation principle for scientific results. The
-most common CLI (which is also known as a shell) in GNU/Linux is GNU Bash,
-we strongly encourage you to put aside several hours and go through this
-beautifully explained web page:
-@url{https://flossmanuals.net/command-line/}. You don't need to read or
-even fully understand the whole thing, only a general knowledge of the
-first few chapters are enough to get you going.
-
-Since the operations in the GUI are limited and they are visible, reading a
-manual is not that important in the GUI (most programs don't even have
-any!). However, to give you the creative power explained above, with a CLI
-program, it is best if you first read the manual of any program you are
-using. You don't need to memorize any details, only an understanding of the
-generalities is needed. Once you start working, there are more easier ways
-to remember a particular option or operation detail, see @ref{Getting
-help}.
+On the command-line, you can run any series of actions, coming from whichever
CLI-capable programs you have chosen yourself, in any possible permutation,
with one command@footnote{By writing a shell script and running it, for example
see the tutorials in @ref{Tutorials}.}.
+This allows for much more creativity and exact reproducibility that is not
possible for a GUI user.
+For technical and scientific operations, where the same operation (using
various programs) has to be done on a large set of data files, this is
crucially important.
+It also allows exact reproducibility which is a foundation principle for
scientific results.
+The most common CLI (also known as a shell) in GNU/Linux is GNU Bash; we
strongly encourage you to put aside several hours and go through this
beautifully explained web page: @url{https://flossmanuals.net/command-line/}.
+You don't need to read or even fully understand the whole thing; a general
knowledge of the first few chapters is enough to get you going.
+
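As a minimal sketch of the shell-script approach mentioned in the footnote above: recording the steps in a script file makes the exact same series of commands repeatable on any set of files. The file names here are hypothetical and @command{echo} stands in for real analysis programs such as @command{astnoisechisel}:

```shell
#!/bin/sh
# Minimal reproducible-pipeline sketch (hypothetical file names):
# the same steps run identically every time the script is executed.
set -e                               # stop at the first failing step
for img in img-a.fits img-b.fits; do
    echo "processing $img"           # stand-in for a real program
done
```

Running the script once or a thousand times applies the identical sequence of operations, which is the reproducibility that is hard to guarantee with GUI clicks.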
+Since the operations in the GUI are limited and they are visible, reading a
manual is not that important in the GUI (most programs don't even have any!).
+However, to give you the creative power explained above, with a CLI program,
it is best if you first read the manual of any program you are using.
+You don't need to memorize any details, only an understanding of the
generalities is needed.
+Once you start working, there are easier ways to remember a particular option
or operation detail, see @ref{Getting help}.
@cindex GNU Emacs
@cindex Virtual console
-To experience the command-line in its full glory and not in the GUI
-terminal emulator, press the following keys together:
-@key{CTRL+ALT+F4}@footnote{Instead of @key{F4}, you can use any of the keys
-from @key{F1} to @key{F6} for different virtual consoles depending on your
-GNU/Linux distribution, try them all out. You can also run a separate GUI
-from within this console if you want to.} to access the virtual console. To
-return back to your GUI, press the same keys above replacing @key{F4} with
-@key{F7} (or @key{F1}, or @key{F2}, depending on your GNU/Linux
-distribution). In the virtual console, the GUI, with all its distracting
-colors and information, is gone. Enabling you to focus entirely on your
-actual work.
+To experience the command-line in its full glory and not in the GUI terminal
emulator, press the following keys together: @key{CTRL+ALT+F4}@footnote{Instead
of @key{F4}, you can use any of the keys from @key{F1} to @key{F6} for
different virtual consoles depending on your GNU/Linux distribution, try them
all out.
+You can also run a separate GUI from within this console if you want to.} to
access the virtual console.
+To return to your GUI, press the same keys above, replacing @key{F4} with
@key{F7} (or @key{F1}, or @key{F2}, depending on your GNU/Linux distribution).
+In the virtual console, the GUI, with all its distracting colors and
information, is gone, enabling you to focus entirely on your actual work.
@cindex Resource heavy operations
-For operations that use a lot of your system's resources (processing a
-large number of large astronomical images for example), the virtual
-console is the place to run them. This is because the GUI is not
-competing with your research work for your system's RAM and CPU. Since
-the virtual consoles are completely independent, you can even log out
-of your GUI environment to give even more of your hardware resources
-to the programs you are running and thus reduce the operating time.
+For operations that use a lot of your system's resources (processing a large
number of large astronomical images for example), the virtual console is the
place to run them.
+This is because the GUI is not competing with your research work for your
system's RAM and CPU.
+Since the virtual consoles are completely independent, you can even log out of
your GUI environment to give even more of your hardware resources to the
programs you are running and thus reduce the operating time.
@cindex Secure shell
@cindex SSH
@cindex Remote operation
-Since it uses far less system resources, the CLI is also convenient for
-remote access to your computer. Using secure shell (SSH) you can log in
-securely to your system (similar to the virtual console) from anywhere even
-if the connection speeds are low. There are apps for smart phones and
-tablets which allow you to do this.
+Since it uses far fewer system resources, the CLI is also convenient for
remote access to your computer.
+Using secure shell (SSH) you can log in securely to your system (similar to
the virtual console) from anywhere even if the connection speeds are low.
+There are apps for smart phones and tablets which allow you to do this.
@@ -1484,79 +1195,48 @@ tablets which allow you to do this.
@cindex Halted program
@cindex Program crashing
@cindex Inconsistent results
-According to Wikipedia ``a software bug is an error, flaw, failure, or
-fault in a computer program or system that causes it to produce an
-incorrect or unexpected result, or to behave in unintended ways''. So when
-you see that a program is crashing, not reading your input correctly,
-giving the wrong results, or not writing your output correctly, you have
-found a bug. In such cases, it is best if you report the bug to the
-developers. The programs will also inform you if known impossible
-situations occur (which are caused by something unexpected) and will ask
-the users to report the bug issue.
+According to Wikipedia ``a software bug is an error, flaw, failure, or fault
in a computer program or system that causes it to produce an incorrect or
unexpected result, or to behave in unintended ways''.
+So when you see that a program is crashing, not reading your input correctly,
giving the wrong results, or not writing your output correctly, you have found
a bug.
+In such cases, it is best if you report the bug to the developers.
+The programs will also inform you if known impossible situations occur (which
are caused by something unexpected) and will ask you to report the bug.
@cindex Bug reporting
-Prior to actually filing a bug report, it is best to search previous
-reports. The issue might have already been found and even solved. The best
-place to check if your bug has already been discussed is the bugs tracker
-on @ref{Gnuastro project webpage} at
-@url{https://savannah.gnu.org/bugs/?group=gnuastro}. In the top search
-fields (under ``Display Criteria'') set the ``Open/Closed'' drop-down menu
-to ``Any'' and choose the respective program or general category of the bug
-in ``Category'' and click the ``Apply'' button. The results colored green
-have already been solved and the status of those colored in red is shown in
-the table.
+Prior to actually filing a bug report, it is best to search previous reports.
+The issue might have already been found and even solved.
+The best place to check if your bug has already been discussed is the bugs
tracker on @ref{Gnuastro project webpage} at
@url{https://savannah.gnu.org/bugs/?group=gnuastro}.
+In the top search fields (under ``Display Criteria'') set the ``Open/Closed''
drop-down menu to ``Any'' and choose the respective program or general category
of the bug in ``Category'' and click the ``Apply'' button.
+The results colored green have already been solved and the status of those
colored in red is shown in the table.
@cindex Version control
-Recently corrected bugs are probably not yet publicly released because
-they are scheduled for the next Gnuastro stable release. If the bug is
-solved but not yet released and it is an urgent issue for you, you can
-get the version controlled source and compile that, see @ref{Version
-controlled source}.
-
-To solve the issue as readily as possible, please follow the following to
-guidelines in your bug report. The
-@url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report
-Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html,
-How To Ask Questions The Smart Way} essays also provide some good generic
-advice for all software (don't contact their authors for Gnuastro's
-problems). Mastering the art of giving good bug reports (like asking good
-questions) can greatly enhance your experience with any free and open
-source software. So investing the time to read through these essays will
-greatly reduce your frustration after you see something doesn't work the
-way you feel it is supposed to for a large range of software, not just
-Gnuastro.
+Recently corrected bugs are probably not yet publicly released because they
are scheduled for the next Gnuastro stable release.
+If the bug is solved but not yet released and it is an urgent issue for you,
you can get the version controlled source and compile that, see @ref{Version
controlled source}.
+
+To solve the issue as readily as possible, please follow these guidelines in
your bug report.
+The @url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report
Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html, How
To Ask Questions The Smart Way} essays also provide some good generic advice
for all software (don't contact their authors for Gnuastro's problems).
+Mastering the art of giving good bug reports (like asking good questions) can
greatly enhance your experience with any free and open source software.
+So investing the time to read through these essays will greatly reduce your
frustration after you see something doesn't work the way you feel it is
supposed to for a large range of software, not just Gnuastro.
@table @strong
@item Be descriptive
-Please provide as many details as possible and be very descriptive. Explain
-what you expected and what the output was: it might be that your
-expectation was wrong. Also please clearly state which sections of the
-Gnuastro book (this book), or other references you have studied to
-understand the problem. This can be useful in correcting the book (adding
-links to likely places where users will check). But more importantly, it
-will be encouraging for the developers, since you are showing how serious
-you are about the problem and that you have actually put some thought into
-it. ``To be able to ask a question clearly is two-thirds of the way to
-getting it answered.'' -- John Ruskin (1819-1900).
+Please provide as many details as possible and be very descriptive.
+Explain what you expected and what the output was: it might be that your
expectation was wrong.
+Also please clearly state which sections of the Gnuastro book (this book), or
other references you have studied to understand the problem.
+This can be useful in correcting the book (adding links to likely places where
users will check).
+But more importantly, it will be encouraging for the developers, since you are
showing how serious you are about the problem and that you have actually put
some thought into it.
+``To be able to ask a question clearly is two-thirds of the way to getting it
answered.'' -- John Ruskin (1819-1900).
@item Individual and independent bug reports
-If you have found multiple bugs, please send them as separate (and
-independent) bugs (as much as possible). This will significantly help
-us in managing and resolving them sooner.
+If you have found multiple bugs, please send them as separate (and
independent) bugs (as much as possible).
+This will significantly help us in managing and resolving them sooner.
@cindex Reproducible bug reports
@item Reproducible bug reports
-If we cannot exactly reproduce your bug, then it is very hard to resolve
-it. So please send us a Minimal working
-example@footnote{@url{http://en.wikipedia.org/wiki/Minimal_Working_Example}}
-along with the description. For example in running a program, please send
-us the full command-line text and the output with the @option{-P} option,
-see @ref{Operating mode options}. If it is caused only for a certain input,
-also send us that input file. In case the input FITS is large, please use
-Crop to only crop the problematic section and make it as small as possible
-so it can easily be uploaded and downloaded and not waste the archive's
-storage, see @ref{Crop}.
+If we cannot exactly reproduce your bug, then it is very hard to resolve it.
+So please send us a Minimal working
example@footnote{@url{http://en.wikipedia.org/wiki/Minimal_Working_Example}}
along with the description.
+For example, when running a program, please send us the full command-line
text and the output with the @option{-P} option, see @ref{Operating mode
options}.
+If it is caused only for a certain input, also send us that input file.
+In case the input FITS is large, please use Crop to only crop the problematic
section and make it as small as possible so it can easily be uploaded and
downloaded and not waste the archive's storage, see @ref{Crop}.
@end table
@noindent
@@ -1567,33 +1247,28 @@ There are generally two ways to inform us of bugs:
@cindex Mailing list: bug-gnuastro
@cindex @code{bug-gnuastro@@gnu.org}
@item
-Send a mail to @code{bug-gnuastro@@gnu.org}. Any mail you send to this
-address will be distributed through the bug-gnuastro mailing
-list@footnote{@url{https://lists.gnu.org/mailman/listinfo/bug-gnuastro}}. This
-is the simplest way to send us bug reports. The developers will then
-register the bug into the project webpage (next choice) for you.
+Send a mail to @code{bug-gnuastro@@gnu.org}.
+Any mail you send to this address will be distributed through the bug-gnuastro
mailing
list@footnote{@url{https://lists.gnu.org/mailman/listinfo/bug-gnuastro}}.
+This is the simplest way to send us bug reports.
+The developers will then register the bug into the project webpage (next
choice) for you.
@cindex Gnuastro project page
@cindex Support request manager
@cindex Submit new tracker item
@cindex Anonymous bug submission
@item
-Use the Gnuastro project webpage at
-@url{https://savannah.gnu.org/projects/gnuastro/}: There are two ways
-to get to the submission page as listed below. Fill in the form as
-described below and submit it (see @ref{Gnuastro project webpage} for
-more on the project webpage).
+Use the Gnuastro project webpage at
@url{https://savannah.gnu.org/projects/gnuastro/}: There are two ways to get to
the submission page as listed below.
+Fill in the form as described below and submit it (see @ref{Gnuastro project
webpage} for more on the project webpage).
@itemize
@item
-Using the top horizontal menu items, immediately under the top page
-title. Hovering your mouse on ``Support'' will open a drop-down
-list. Select ``Submit new''.
+Using the top horizontal menu items, immediately under the top page title.
+Hovering your mouse on ``Support'' will open a drop-down list.
+Select ``Submit new''.
@item
-In the main body of the page, under the ``Communication tools''
-section, click on ``Submit new item''.
+In the main body of the page, under the ``Communication tools'' section, click
on ``Submit new item''.
@end itemize
@end itemize
@@ -1602,15 +1277,10 @@ section, click on ``Submit new item''.
@cindex Bug tracker
@cindex Task tracker
@cindex Viewing trackers
-Once the items have been registered in the mailing list or webpage,
-the developers will add it to either the ``Bug Tracker'' or ``Task
-Manager'' trackers of the Gnuastro project webpage. These two trackers
-can only be edited by the Gnuastro project developers, but they can be
-browsed by anyone, so you can follow the progress on your bug. You are
-most welcome to join us in developing Gnuastro and fixing the bug you
-have found maybe a good starting point. Gnuastro is designed to be
-easy for anyone to develop (see @ref{Science and its tools}) and there
-is a full chapter devoted to developing it: @ref{Developing}.
+Once the items have been registered in the mailing list or webpage, the
developers will add it to either the ``Bug Tracker'' or ``Task Manager''
trackers of the Gnuastro project webpage.
+These two trackers can only be edited by the Gnuastro project developers, but
they can be browsed by anyone, so you can follow the progress on your bug.
+You are most welcome to join us in developing Gnuastro; fixing the bug you
have found may be a good starting point.
+Gnuastro is designed to be easy for anyone to develop (see @ref{Science and
its tools}) and there is a full chapter devoted to developing it:
@ref{Developing}.
@node Suggest new feature, Announcements, Report a bug, Introduction
@@ -1618,52 +1288,30 @@ is a full chapter devoted to developing it:
@ref{Developing}.
@cindex Feature requests
@cindex Additions to Gnuastro
-We would always be happy to hear of suggested new features. For every
-program there are already lists of features that we are planning to
-add. You can see the current list of plans from the Gnuastro project
-webpage at @url{https://savannah.gnu.org/projects/gnuastro/} and following
-@clicksequence{``Tasks''@click{}``Browse''} on the horizontal menu at the
-top of the page immediately under the title, see @ref{Gnuastro project
-webpage}. If you want to request a feature to an existing program, click on
-the ``Display Criteria'' above the list and under ``Category'', choose that
-particular program. Under ``Category'' you can also see the existing
-suggestions for new programs or other cases like installation,
-documentation or libraries. Also be sure to set the ``Open/Closed'' value
-to ``Any''.
-
-If the feature you want to suggest is not already listed in the task
-manager, then follow the steps that are fully described in @ref{Report a
-bug}. Please have in mind that the developers are all busy with their own
-astronomical research, and implementing existing ``task''s to add or
-resolving bugs. Gnuastro is a volunteer effort and none of the developers
-are paid for their hard work. So, although we will try our best, please
-don't not expect that your suggested feature be immediately included (with
-the next release of Gnuastro).
-
-The best person to apply the exciting new feature you have in mind is
-you, since you have the motivation and need. In fact Gnuastro is
-designed for making it as easy as possible for you to hack into it
-(add new features, change existing ones and so on), see @ref{Science
-and its tools}. Please have a look at the chapter devoted to
-developing (@ref{Developing}) and start applying your desired
-feature. Once you have added it, you can use it for your own work and
-if you feel you want others to benefit from your work, you can request
-for it to become part of Gnuastro. You can then join the developers
-and start maintaining your own part of Gnuastro. If you choose to take
-this path of action please contact us before hand (@ref{Report a bug})
-so we can avoid possible duplicate activities and get interested
-people in contact.
+We would always be happy to hear of suggested new features.
+For every program there are already lists of features that we are planning to
add.
+You can see the current list of plans from the Gnuastro project webpage at
@url{https://savannah.gnu.org/projects/gnuastro/} and following
@clicksequence{``Tasks''@click{}``Browse''} on the horizontal menu at the top
of the page immediately under the title, see @ref{Gnuastro project webpage}.
+If you want to request a feature for an existing program, click on the
``Display Criteria'' above the list and under ``Category'', choose that
particular program.
+Under ``Category'' you can also see the existing suggestions for new programs
or other cases like installation, documentation or libraries.
+Also be sure to set the ``Open/Closed'' value to ``Any''.
+
+If the feature you want to suggest is not already listed in the task manager,
then follow the steps that are fully described in @ref{Report a bug}.
+Please bear in mind that the developers are all busy with their own
astronomical research, implementing the existing ``task''s, and resolving
bugs.
+Gnuastro is a volunteer effort and none of the developers are paid for their
hard work.
+So, although we will try our best, please don't expect your suggested feature
to be included immediately (in the next release of Gnuastro).
+
+The best person to apply the exciting new feature you have in mind is you,
since you have the motivation and need.
+In fact, Gnuastro is designed to make it as easy as possible for you to hack
into it (add new features, change existing ones and so on), see @ref{Science
and its tools}.
+Please have a look at the chapter devoted to developing (@ref{Developing}) and
start applying your desired feature.
+Once you have added it, you can use it for your own work and if you feel you
want others to benefit from your work, you can request for it to become part of
Gnuastro.
+You can then join the developers and start maintaining your own part of
Gnuastro.
+If you choose to take this path of action, please contact us beforehand
(@ref{Report a bug}) so we can avoid possible duplicate activities and put
interested people in contact.
@cartouche
@noindent
-@strong{Gnuastro is a collection of low level programs:} As described in
-@ref{Program design philosophy}, a founding principle of Gnuastro is that
-each library or program should be basic and low-level. High level jobs
-should be done by running the separate programs or using separate functions
-in succession through a shell script or calling the libraries by higher
-level functions, see the examples in @ref{Tutorials}. So when making the
-suggestions please consider how your desired job can best be broken into
-separate steps and modularized.
+@strong{Gnuastro is a collection of low level programs:} As described in
@ref{Program design philosophy}, a founding principle of Gnuastro is that each
library or program should be basic and low-level.
+High level jobs should be done by running the separate programs or using
separate functions in succession through a shell script or calling the
libraries by higher level functions, see the examples in @ref{Tutorials}.
+So when making suggestions, please consider how your desired job can best be
broken into separate steps and modularized.
@end cartouche
@@ -1673,19 +1321,15 @@ separate steps and modularized.
@cindex Announcements
@cindex Mailing list: info-gnuastro
-Gnuastro has a dedicated mailing list for making announcements
-(@code{info-gnuastro}). Anyone can subscribe to this mailing list. Anytime
-there is a new stable or test release, an email will be circulated
-there. The email contains a summary of the overall changes along with a
-detailed list (from the @file{NEWS} file). This mailing list is thus the
-best way to stay up to date with new releases, easily learn about the
-updated/new features, or dependencies (see @ref{Dependencies}).
+Gnuastro has a dedicated mailing list for making announcements
(@code{info-gnuastro}).
+Anyone can subscribe to this mailing list.
+Anytime there is a new stable or test release, an email will be circulated
there.
+The email contains a summary of the overall changes along with a detailed list
(from the @file{NEWS} file).
+This mailing list is thus the best way to stay up to date with new releases,
easily learn about the updated/new features, or dependencies (see
@ref{Dependencies}).
-To subscribe to this list, please visit
-@url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}. Traffic (number
-of mails per unit time) in this list is designed to be low: only a handful
-of mails per year. Previous announcements are available on
-@url{http://lists.gnu.org/archive/html/info-gnuastro/, its archive}.
+To subscribe to this list, please visit
@url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}.
+Traffic (number of mails per unit time) in this list is designed to be low:
only a handful of mails per year.
+Previous announcements are available on
@url{http://lists.gnu.org/archive/html/info-gnuastro/, its archive}.
@@ -1697,29 +1341,19 @@ In this book we have the following conventions:
@itemize
@item
-All commands that are to be run on the shell (command-line) prompt as the
-user start with a @command{$}. In case they must be run as a super-user or
-system administrator, they will start with a single @command{#}. If the
-command is in a separate line and next line @code{is also in the code type
-face}, but doesn't have any of the @command{$} or @command{#} signs, then
-it is the output of the command after it is run. As a user, you don't need
-to type those lines. A line that starts with @command{##} is just a comment
-for explaining the command to a human reader and must not be typed.
+All commands that are to be run on the shell (command-line) prompt as the user
start with a @command{$}.
+In case they must be run as a super-user or system administrator, they will
start with a single @command{#}.
+If the command is on a separate line and the next line @code{is also in the
code type face}, but doesn't have any of the @command{$} or @command{#} signs,
then it is the output of the command after it is run.
+As a user, you don't need to type those lines.
+A line that starts with @command{##} is just a comment for explaining the
command to a human reader and must not be typed.
@item
-If the command becomes larger than the page width a @key{\} is
-inserted in the code. If you are typing the code by hand on the
-command-line, you don't need to use multiple lines or add the extra
-space characters, so you can omit them. If you want to copy and paste
-these examples (highly discouraged!) then the @key{\} should stay.
-
-The @key{\} character is a shell escape character which is used
-commonly to make characters which have special meaning for the shell
-loose that special place (the shell will not treat them specially if
-there is a @key{\} behind them). When it is a last character in a line
-(the next character is a new-line character) the new-line character
-looses its meaning an the shell sees it as a simple white-space
-character, enabling you to use multiple lines to write your commands.
+If the command becomes longer than the page width, a @key{\} is inserted in
the code.
+If you are typing the code by hand on the command-line, you don't need to use
multiple lines or add the extra space characters, so you can omit them.
+If you want to copy and paste these examples (highly discouraged!) then the
@key{\} should stay.
+
+The @key{\} character is a shell escape character which is commonly used to
make characters that have a special meaning for the shell lose that special
meaning (the shell will not treat them specially if there is a @key{\} before
them).
+When it is the last character in a line (the next character is a new-line
character), the new-line character loses its meaning and the shell sees it as
a simple white-space character, enabling you to use multiple lines to write
your commands.
@end itemize
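+These conventions can be illustrated with a short shell sketch (this block is
an illustration by the editor, not an example from the book): the @command{##}
line is a comment for the reader, and the trailing @key{\} joins the two
physical lines into one logical command.

```shell
## Print a message; the trailing backslash escapes the newline,
## so the shell reads both physical lines as a single command.
echo "a long command" \
     "split over two lines"
# prints: a long command split over two lines
```

When typing this by hand you can simply write it on one line and omit the
@key{\}; only copy-pasted multi-line examples need it.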
@@ -1728,76 +1362,98 @@ character, enabling you to use multiple lines to write
your commands.
@node Acknowledgments, , Conventions, Introduction
@section Acknowledgments
-Gnuastro would not have been possible without scholarships and grants from
-several funding institutions. We thus ask that if you used Gnuastro in any
-of your papers/reports, please add the proper citation and acknowledge the
-funding agencies/projects. For details of which papers to cite (may be
-different for different programs) and get the acknowledgment statement to
-include in your paper, please run the relevant programs with the common
-@option{--cite} option like the example commands below (for more on
-@option{--cite}, please see @ref{Operating mode options}).
+Gnuastro would not have been possible without scholarships and grants from
several funding institutions.
+We thus ask that if you used Gnuastro in any of your papers/reports, please
add the proper citation and acknowledge the funding agencies/projects.
+For details of which papers to cite (may be different for different programs)
and get the acknowledgment statement to include in your paper, please run the
relevant programs with the common @option{--cite} option like the example
commands below (for more on @option{--cite}, please see @ref{Operating mode
options}).
@example
$ astnoisechisel --cite
$ astmkcatalog --cite
@end example
-Here, we'll acknowledge all the institutions (and their grants) along with
-the people who helped make Gnuastro possible. The full list of Gnuastro
-authors is available at the start of this book and the @file{AUTHORS} file
-in the source code (both are generated automatically from the version
-controlled history). The plain text file @file{THANKS}, which is also
-distributed along with the source code, contains the list of people and
-institutions who played an indirect role in Gnuastro (not committed any
-code in the Gnuastro version controlled history).
-
-The Japanese Ministry of Education, Culture, Sports, Science, and
-Technology (MEXT) scholarship for Mohammad Akhlaghi's Masters and PhD
-degree in Tohoku University Astronomical Institute had an instrumental role
-in the long term learning and planning that made the idea of Gnuastro
-possible. The very critical view points of Professor Takashi Ichikawa
-(Mohammad's adviser) were also instrumental in the initial ideas and
-creation of Gnuastro. Afterwards, the European Research Council (ERC)
-advanced grant 339659-MUSICOS (Principal investigator: Roland Bacon) was
-vital in the growth and expansion of Gnuastro. Working with Roland at the
-Centre de Recherche Astrophysique de Lyon (CRAL), enabled a thorough
-re-write of the core functionality of all libraries and programs, turning
-Gnuastro into the large collection of generic programs and libraries it is
-today. Work on improving Gnuastro and making it mature is now continuing
-primarily in the Instituto de Astrofisica de Canarias (IAC) and in
-particular in collaboration with Johan Knapen and Ignacio Trujillo.
+Here, we'll acknowledge all the institutions (and their grants) along with the
people who helped make Gnuastro possible.
+The full list of Gnuastro authors is available at the start of this book and
the @file{AUTHORS} file in the source code (both are generated automatically
from the version controlled history).
+The plain text file @file{THANKS}, which is also distributed along with the
source code, contains the list of people and institutions who played an
indirect role in Gnuastro (not committed any code in the Gnuastro version
controlled history).
+
+The Japanese Ministry of Education, Culture, Sports, Science, and Technology
(MEXT) scholarship for Mohammad Akhlaghi's Masters and PhD degree in Tohoku
University Astronomical Institute had an instrumental role in the long term
learning and planning that made the idea of Gnuastro possible.
+The very critical view points of Professor Takashi Ichikawa (Mohammad's
adviser) were also instrumental in the initial ideas and creation of Gnuastro.
+Afterwards, the European Research Council (ERC) advanced grant 339659-MUSICOS
(Principal investigator: Roland Bacon) was vital in the growth and expansion of
Gnuastro.
+Working with Roland at the Centre de Recherche Astrophysique de Lyon (CRAL),
enabled a thorough re-write of the core functionality of all libraries and
programs, turning Gnuastro into the large collection of generic programs and
libraries it is today.
+Work on improving Gnuastro and making it mature is now continuing primarily in
the Instituto de Astrofisica de Canarias (IAC) and in particular in
collaboration with Johan Knapen and Ignacio Trujillo.
@c To the developers: please keep this in the same order as the THANKS file
@c (alphabetical, except for the names in the paragraph above).
-In general, we would like to gratefully thank the following people for
-their useful and constructive comments and suggestions (in alphabetical
-order by family name): Valentina Abril-melgarejo, Marjan Akbari, Hamed
-Altafi, Roland Bacon, Roberto Baena Gall\'e, Zahra Bagheri, Karl Berry,
-Leindert Boogaard, Nicolas Bouch@'e, Fernando Buitrago, Adrian Bunk, Rosa
-Calvi, Nushkia Chamba, Benjamin Clement, Nima Dehdilani, Antonio Diaz Diaz,
-Pierre-Alain Duc, Elham Eftekhari, Gaspar Galaz, Th@'er@`ese Godefroy,
-Madusha Gunawardhana, Bruno Haible, Stephen Hamer, Takashi Ichikawa, Ra@'ul
-Infante Sainz, Brandon Invergo, Oryna Ivashtenko, Aur@'elien Jarno, Lee
-Kelvin, Brandon Kelly, Mohammad-Reza Khellat, Johan Knapen, Geoffry
-Krouchi, Floriane Leclercq, Alan Lefor, Guillaume Mahler, Juan Molina
-Tobar, Francesco Montanari, Dmitrii Oparin, Bertrand Pain, William Pence,
-Mamta Pommier, Bob Proulx, Teymoor Saifollahi, Elham Saremi, Yahya
-Sefidbakht, Alejandro Serrano Borlaff, Jenny Sorce, Lee Spitler, Richard
-Stallman, Michael Stein, Ole Streicher, Alfred M. Szmidt, Michel Tallon,
-Juan C. Tello, @'Eric Thi@'ebaut, Ignacio Trujillo, David Valls-Gabaud,
-Aaron Watkins, Christopher Willmer, Sara Yousefi Taemeh, Johannes Zabl. The
-GNU French Translation Team is also managing the French version of the top
-Gnuastro webpage which we highly appreciate. Finally we should thank all
-the (sometimes anonymous) people in various online forums which patiently
-answered all our small (but important) technical questions.
-
-All work on Gnuastro has been voluntary, but the authors are most grateful
-to the following institutions (in chronological order) for hosting us in
-our research. Where necessary, these institutions have disclaimed any
-ownership of the parts of Gnuastro that were developed there, thus insuring
-the freedom of Gnuastro for the future (see @ref{Copyright assignment}). We
-highly appreciate their support for free software, and thus free science,
-and therefore a free society.
+In general, we would like to gratefully thank the following people for their
useful and constructive comments and suggestions (in alphabetical order by
family name):
+Valentina Abril-melgarejo,
+Marjan Akbari,
+Hamed Altafi,
+Roland Bacon,
+Roberto Baena Gall\'e,
+Zahra Bagheri,
+Karl Berry,
+Leindert Boogaard,
+Nicolas Bouch@'e,
+Fernando Buitrago,
+Adrian Bunk,
+Rosa Calvi,
+Nushkia Chamba,
+Benjamin Clement,
+Nima Dehdilani,
+Antonio Diaz Diaz,
+Pierre-Alain Duc,
+Elham Eftekhari,
+Gaspar Galaz,
+Th@'er@`ese Godefroy,
+Madusha Gunawardhana,
+Bruno Haible,
+Stephen Hamer,
+Takashi Ichikawa,
+Ra@'ul Infante Sainz,
+Brandon Invergo,
+Oryna Ivashtenko,
+Aur@'elien Jarno,
+Lee Kelvin,
+Brandon Kelly,
+Mohammad-Reza Khellat,
+Johan Knapen,
+Geoffry Krouchi,
+Floriane Leclercq,
+Alan Lefor,
+Guillaume Mahler,
+Juan Molina Tobar,
+Francesco Montanari,
+Dmitrii Oparin,
+Bertrand Pain,
+William Pence,
+Mamta Pommier,
+Bob Proulx,
+Teymoor Saifollahi,
+Elham Saremi,
+Yahya Sefidbakht,
+Alejandro Serrano Borlaff,
+Zahra Sharbaf,
+Jenny Sorce,
+Lee Spitler,
+Richard Stallman,
+Michael Stein,
+Ole Streicher,
+Alfred M. Szmidt,
+Michel Tallon,
+Juan C. Tello,
+@'Eric Thi@'ebaut,
+Ignacio Trujillo,
+David Valls-Gabaud,
+Aaron Watkins,
+Michael H.F. Wilkinson,
+Christopher Willmer,
+Sara Yousefi Taemeh,
+Johannes Zabl.
+The GNU French Translation Team is also managing the French version of the top
Gnuastro webpage which we highly appreciate.
+Finally we should thank all the (sometimes anonymous) people in various
online forums who patiently answered all our small (but important) technical
questions.
+
+All work on Gnuastro has been voluntary, but the authors are most grateful to
the following institutions (in chronological order) for hosting us in our
research.
+Where necessary, these institutions have disclaimed any ownership of the
parts of Gnuastro that were developed there, thus ensuring the freedom of
Gnuastro for the future (see @ref{Copyright assignment}).
+We highly appreciate their support for free software, and thus free science,
and therefore a free society.
@quotation
Tohoku University Astronomical Institute, Sendai, Japan.@*
@@ -1823,79 +1479,41 @@ Instituto de Astrofisica de Canarias (IAC), Tenerife,
Spain.@*
@cindex Tutorial
@cindex Cookbook
-To help new users have a smooth and easy start with Gnuastro, in this
-chapter several thoroughly elaborated tutorials, or cookbooks, are
-provided. These tutorials demonstrate the capabilities of different
-Gnuastro programs and libraries, along with tips and guidelines for the
-best practices of using them in various realistic situations.
-
-We strongly recommend going through these tutorials to get a good feeling
-of how the programs are related (built in a modular design to be used
-together in a pipeline), very similar to the core Unix-based programs that
-they were modeled on. Therefore these tutorials will greatly help in
-optimally using Gnuastro's programs (and generally, the Unix-like
-command-line environment) effectively for your research.
-
-In @ref{Sufi simulates a detection}, we'll start with a
-fictional@footnote{The two historically motivated tutorials (@ref{Sufi
-simulates a detection} is not intended to be a historical reference (the
-historical facts of this fictional tutorial used Wikipedia as a
-reference). This form of presenting a tutorial was influenced by the
-PGF/TikZ and Beamer manuals. They are both packages in in @TeX{} and
-@LaTeX{}, the first is a high-level vector graphic programming environment,
-while with the second you can make presentation slides. On a similar topic,
-there are also some nice words of wisdom for Unix-like systems called
-@url{http://catb.org/esr/writings/unix-koans, Rootless Root}. These also
-have a similar style but they use a mythical figure named Master Foo. If
-you already have some experience in Unix-like systems, you will definitely
-find these Unix Koans entertaining/educative.} tutorial explaining how Abd
-al-rahman Sufi (903 -- 986 A.D., the first recorded description of
-``nebulous'' objects in the heavens is attributed to him) could have used
-some of Gnuastro's programs for a realistic simulation of his observations
-and see if his detection of nebulous objects was trust-able. Because all
-conditions are under control in a simulated/mock environment/dataset, mock
-datasets can be a valuable tool to inspect the limitations of your data
-analysis and processing. But they need to be as realistic as possible, so
-the first tutorial is dedicated to this important step of an analysis.
-
-The next two tutorials (@ref{General program usage tutorial} and
-@ref{Detecting large extended targets}) use real input datasets from some
-of the deep Hubble Space Telescope (HST) images and the Sloan Digital Sky
-Survey (SDSS) respectively. Their aim is to demonstrate some real-world
-problems that many astronomers often face and how they can be be solved
-with Gnuastro's programs.
-
-The ultimate aim of @ref{General program usage tutorial} is to detect
-galaxies in a deep HST image, measure their positions and brightness and
-select those with the strongest colors. In the process, it takes many
-detours to introduce you to the useful capabilities of many of the
-programs. So please be patient in reading it. If you don't have much time
-and can only try one of the tutorials, we recommend this one.
+To help new users have a smooth and easy start with Gnuastro, in this chapter
several thoroughly elaborated tutorials, or cookbooks, are provided.
+These tutorials demonstrate the capabilities of different Gnuastro programs
and libraries, along with tips and guidelines for the best practices of using
them in various realistic situations.
+
+We strongly recommend going through these tutorials to get a good feeling of
how the programs are related (built in a modular design to be used together in
a pipeline), very similar to the core Unix-based programs that they were
modeled on.
+Therefore these tutorials will greatly help you use Gnuastro's programs (and, more generally, the Unix-like command-line environment) effectively in your research.
+
+In @ref{Sufi simulates a detection}, we'll start with a fictional@footnote{This fictional tutorial is not intended to be a historical reference (its historical facts were taken from Wikipedia).
+This form of presenting a tutorial was influenced by the PGF/TikZ and Beamer
manuals.
+They are both packages in @TeX{} and @LaTeX{}; the first is a high-level vector graphics programming environment, while with the second you can make presentation slides.
+On a similar topic, there are also some nice words of wisdom for Unix-like
systems called @url{http://catb.org/esr/writings/unix-koans, Rootless Root}.
+These also have a similar style but they use a mythical figure named Master
Foo.
+If you already have some experience in Unix-like systems, you will definitely find these Unix Koans entertaining/educative.} tutorial explaining how Abd al-rahman Sufi (903 -- 986 A.D., the first recorded description of ``nebulous'' objects in the heavens is attributed to him) could have used some of Gnuastro's programs for a realistic simulation of his observations, to check whether his detections of nebulous objects were trustworthy.
+Because all conditions are under control in a simulated/mock
environment/dataset, mock datasets can be a valuable tool to inspect the
limitations of your data analysis and processing.
+But they need to be as realistic as possible, so the first tutorial is
dedicated to this important step of an analysis.
+
+The next two tutorials (@ref{General program usage tutorial} and
@ref{Detecting large extended targets}) use real input datasets from some of
the deep Hubble Space Telescope (HST) images and the Sloan Digital Sky Survey
(SDSS) respectively.
+Their aim is to demonstrate some real-world problems that many astronomers often face and how they can be solved with Gnuastro's programs.
+
+The ultimate aim of @ref{General program usage tutorial} is to detect galaxies
in a deep HST image, measure their positions and brightness and select those
with the strongest colors.
+In the process, it takes many detours to introduce you to the useful
capabilities of many of the programs.
+So please be patient in reading it.
+If you don't have much time and can only try one of the tutorials, we
recommend this one.
@cindex PSF
@cindex Point spread function
-@ref{Detecting large extended targets} deals with a major problem in
-astronomy: effectively detecting the faint outer wings of bright (and
-large) nearby galaxies to extremely low surface brightness levels (roughly
-one quarter of the local noise level in the example discussed). Besides the
-interesting scientific questions in these low-surface brightness features,
-failure to properly detect them will bias the measurements of the
-background objects and the survey's noise estimates. This is an important
-issue, especially in wide surveys. Because bright/large galaxies and
-stars@footnote{Stars also have similarly large and extended wings due to
-the point spread function, see @ref{PSF}.}, cover a significant fraction of
-the survey area.
-
-In these tutorials, we have intentionally avoided too many cross references
-to make it more easy to read. For more information about a particular
-program, you can visit the section with the same name as the program in
-this book. Each program section in the subsequent chapters starts by
-explaining the general concepts behind what it does, for example see
-@ref{Convolve}. If you only want practical information on running a
-program, for example its options/configuration, input(s) and output(s),
-please consult the subsection titled ``Invoking ProgramName'', for example
-see @ref{Invoking astnoisechisel}. For an explanation of the conventions we
-use in the example codes through the book, please see @ref{Conventions}.
+@ref{Detecting large extended targets} deals with a major problem in
astronomy: effectively detecting the faint outer wings of bright (and large)
nearby galaxies to extremely low surface brightness levels (roughly one quarter
of the local noise level in the example discussed).
+Besides the interesting scientific questions in these low-surface brightness
features, failure to properly detect them will bias the measurements of the
background objects and the survey's noise estimates.
+This is an important issue, especially in wide surveys, because bright/large galaxies and stars@footnote{Stars also have similarly large and extended wings due to the point spread function, see @ref{PSF}.} cover a significant fraction of the survey area.
+
+In these tutorials, we have intentionally avoided too many cross references to make them easier to read.
+For more information about a particular program, you can visit the section
with the same name as the program in this book.
+Each program section in the subsequent chapters starts by explaining the
general concepts behind what it does, for example see @ref{Convolve}.
+If you only want practical information on running a program, for example its
options/configuration, input(s) and output(s), please consult the subsection
titled ``Invoking ProgramName'', for example see @ref{Invoking astnoisechisel}.
+For an explanation of the conventions we use in the example codes throughout the book, please see @ref{Conventions}.
@menu
* Sufi simulates a detection:: Simulating a detection.
@@ -1910,82 +1528,56 @@ use in the example codes through the book, please see
@ref{Conventions}.
@cindex Azophi
@cindex Abd al-rahman Sufi
@cindex Sufi, Abd al-rahman
-It is the year 953 A.D. and Abd al-rahman Sufi (903 -- 986
-A.D.)@footnote{In Latin Sufi is known as Azophi. He was an Iranian
-astronomer. His manuscript ``Book of fixed stars'' contains the first
-recorded observations of the Andromeda galaxy, the Large Magellanic Cloud
-and seven other non-stellar or `nebulous' objects.} is in Shiraz as a
-guest astronomer. He had come there to use the advanced 123 centimeter
-astrolabe for his studies on the Ecliptic. However, something was bothering
-him for a long time. While mapping the constellations, there were several
-non-stellar objects that he had detected in the sky, one of them was in the
-Andromeda constellation. During a trip he had to Yemen, Sufi had seen
-another such object in the southern skies looking over the Indian ocean. He
-wasn't sure if such cloud-like non-stellar objects (which he was the first
-to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical
-objects or if they were only the result of some bias in his
-observations. Could such diffuse objects actually be detected at all with
-his detection technique?
+It is the year 953 A.D. and Abd al-rahman Sufi (903 -- 986 A.D.)@footnote{In
Latin Sufi is known as Azophi.
+He was an Iranian astronomer.
+His manuscript ``Book of fixed stars'' contains the first recorded
observations of the Andromeda galaxy, the Large Magellanic Cloud and seven
other non-stellar or `nebulous' objects.} is in Shiraz as a guest astronomer.
+He had come there to use the advanced 123 centimeter astrolabe for his studies
on the Ecliptic.
+However, something was bothering him for a long time.
+While mapping the constellations, he had detected several non-stellar objects in the sky; one of them was in the Andromeda constellation.
+During a trip to Yemen, Sufi had seen another such object in the southern skies, looking over the Indian ocean.
+He wasn't sure if such cloud-like non-stellar objects (which he was the first
to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical objects or
if they were only the result of some bias in his observations.
+Could such diffuse objects actually be detected at all with his detection
technique?
@cindex Almagest
@cindex Claudius Ptolemy
@cindex Ptolemy, Claudius
-He still had a few hours left until nightfall (when he would continue
-his studies on the ecliptic) so he decided to find an answer to this
-question. He had thoroughly studied Claudius Ptolemy's (90 -- 168 A.D)
-Almagest and had made lots of corrections to it, in particular in
-measuring the brightness. Using his same experience, he was able to
-measure a magnitude for the objects and wanted to simulate his
-observation to see if a simulated object with the same brightness and
-size could be detected in a simulated noise with the same detection
-technique. The general outline of the steps he wants to take are:
+He still had a few hours left until nightfall (when he would continue his
studies on the ecliptic) so he decided to find an answer to this question.
+He had thoroughly studied Claudius Ptolemy's (90 -- 168 A.D) Almagest and had
made lots of corrections to it, in particular in measuring the brightness.
+Using that same experience, he was able to measure a magnitude for the objects and wanted to simulate his observation, to see if a simulated object with the same brightness and size could be detected in simulated noise with the same detection technique.
+The general outline of the steps he wants to take is:
@enumerate
@item
-Make some mock profiles in an over-sampled image. The initial mock
-image has to be over-sampled prior to convolution or other forms of
-transformation in the image. Through his experiences, Sufi knew that
-this is because the image of heavenly bodies is actually transformed
-by the atmosphere or other sources outside the atmosphere (for example
-gravitational lenses) prior to being sampled on an image. Since that
-transformation occurs on a continuous grid, to best approximate it, he
-should do all the work on a finer pixel grid. In the end he can
-re-sample the result to the initially desired grid size.
+Make some mock profiles in an over-sampled image.
+The initial mock image has to be over-sampled prior to convolution or other
forms of transformation in the image.
+Through his experiences, Sufi knew that this is because the image of heavenly
bodies is actually transformed by the atmosphere or other sources outside the
atmosphere (for example gravitational lenses) prior to being sampled on an
image.
+Since that transformation occurs on a continuous grid, to best approximate it,
he should do all the work on a finer pixel grid.
+In the end he can re-sample the result to the initially desired grid size.
@item
@cindex PSF
-Convolve the image with a point spread function (PSF, see @ref{PSF}) that
-is over-sampled to the same resolution as the mock image. Since he wants to
-finish in a reasonable time and the PSF kernel will be very large due to
-oversampling, he has to use frequency domain convolution which has the side
-effect of dimming the edges of the image. So in the first step above he
-also has to build the image to be larger by at least half the width of the
-PSF convolution kernel on each edge.
+Convolve the image with a point spread function (PSF, see @ref{PSF}) that is
over-sampled to the same resolution as the mock image.
+Since he wants to finish in a reasonable time, and the PSF kernel will be very large due to oversampling, he has to use frequency domain convolution, which has the side effect of dimming the edges of the image.
+So in the first step above he also has to build the image to be larger by at
least half the width of the PSF convolution kernel on each edge.
@item
-With all the transformations complete, the image should be re-sampled
-to the same size of the pixels in his detector.
+With all the transformations complete, the image should be re-sampled to the same pixel size as his detector.
@item
-He should remove those extra pixels on all edges to remove frequency
-domain convolution artifacts in the final product.
+He should crop out those extra pixels on all edges to eliminate frequency domain convolution artifacts in the final product.
@item
-He should add noise to the (until now, noise-less) mock image. After
-all, all observations have noise associated with them.
+He should add noise to the (until now, noise-less) mock image.
+After all, all observations have noise associated with them.
@end enumerate
-Fortunately Sufi had heard of GNU Astronomy Utilities from a colleague in
-Isfahan (where he worked) and had installed it on his computer a year
-before. It had tools to do all the steps above. He had used MakeProfiles
-before, but wasn't sure which columns he had chosen in his user or system
-wide configuration files for which parameters, see @ref{Configuration
-files}. So to start his simulation, Sufi runs MakeProfiles with the
-@option{-P} option to make sure what columns in a catalog MakeProfiles
-currently recognizes and the output image parameters. In particular, Sufi
-is interested in the recognized columns (shown below).
+Fortunately Sufi had heard of GNU Astronomy Utilities from a colleague in
Isfahan (where he worked) and had installed it on his computer a year before.
+It had tools to do all the steps above.
+He had used MakeProfiles before, but wasn't sure which columns he had chosen in his user or system-wide configuration files for which parameters; see @ref{Configuration files}.
+So to start his simulation, Sufi runs MakeProfiles with the @option{-P} option to see which catalog columns MakeProfiles currently recognizes, along with the output image parameters.
+In particular, Sufi is interested in the recognized columns (shown below).
@example
$ astmkprof -P
@@ -2016,46 +1608,23 @@ $ astmkprof -P
@end example
@noindent
-In Gnuastro, column counting starts from 1, so the columns are ordered such
-that the first column (number 1) can be an ID he specifies for each object
-(and MakeProfiles ignores), each subsequent column is used for another
-property of the profile. It is also possible to use column names for the
-values of these options and change these defaults, but Sufi preferred to
-stick to the defaults. Fortunately MakeProfiles has the capability to also
-make the PSF which is to be used on the mock image and using the
-@option{--prepforconv} option, he can also make the mock image to be larger
-by the correct amount and all the sources to be shifted by the correct
-amount.
-
-For his initial check he decides to simulate the nebula in the Andromeda
-constellation. The night he was observing, the PSF had roughly a FWHM of
-about 5 pixels, so as the first row (profile), he defines the PSF
-parameters and sets the radius column (@code{rcol} above, fifth column) to
-@code{5.000}, he also chooses a Moffat function for its functional
-form. Remembering how diffuse the nebula in the Andromeda constellation
-was, he decides to simulate it with a mock S@'{e}rsic index 1.0 profile. He
-wants the output to be 499 pixels by 499 pixels, so he can put the center
-of the mock profile in the centeral pixel of the image (note that an even
-number doesn't have a central element).
-
-Looking at his drawings of it, he decides a reasonable effective radius for
-it would be 40 pixels on this image pixel scale, he sets the axis ratio and
-position angle to approximately correct values too and finally he sets the
-total magnitude of the profile to 3.44 which he had accurately
-measured. Sufi also decides to truncate both the mock profile and PSF at 5
-times the respective radius parameters. In the end he decides to put four
-stars on the four corners of the image at very low magnitudes as a visual
-scale.
-
-Using all the information above, he creates the catalog of mock profiles he
-wants in a file named @file{cat.txt} (short for catalog) using his favorite
-text editor and stores it in a directory named @file{simulationtest} in his
-home directory. [The @command{cat} command prints the contents of a file,
-short for ``concatenation''. So please copy-paste the lines after
-``@command{cat cat.txt}'' into @file{cat.txt} when the editor opens in the
-steps above it, note that there are 7 lines, first one starting with
-@key{#}. Also be careful when copying from the PDF format, the Info, web,
-or text formats shouldn't have any problem]:
+In Gnuastro, column counting starts from 1, so the columns are ordered such that the first column (number 1) can be an ID he specifies for each object (and that MakeProfiles ignores); each subsequent column is used for another property of the profile.
+It is also possible to use column names for the values of these options and
change these defaults, but Sufi preferred to stick to the defaults.
+Fortunately, MakeProfiles can also make the PSF that is to be used on the mock image; with the @option{--prepforconv} option, it can also enlarge the mock image by the correct amount and shift all the sources by the correct amount.
+
+For his initial check he decides to simulate the nebula in the Andromeda
constellation.
+The night he was observing, the PSF had roughly a FWHM of about 5 pixels, so as the first row (profile), he defines the PSF parameters and sets the radius column (@code{rcol} above, the fifth column) to @code{5.000}; he also chooses a Moffat function for its functional form.
+Remembering how diffuse the nebula in the Andromeda constellation was, he
decides to simulate it with a mock S@'{e}rsic index 1.0 profile.
+He wants the output to be 499 pixels by 499 pixels, so he can put the center
of the mock profile in the central pixel of the image (note that an even number
doesn't have a central element).
+
+Looking at his drawings of it, he decides a reasonable effective radius for it would be 40 pixels on this image pixel scale; he sets the axis ratio and position angle to approximately correct values too, and finally he sets the total magnitude of the profile to 3.44, which he had accurately measured.
+Sufi also decides to truncate both the mock profile and PSF at 5 times the
respective radius parameters.
+In the end he decides to put four stars on the four corners of the image at
very low magnitudes as a visual scale.
+
+Using all the information above, he creates the catalog of mock profiles he
wants in a file named @file{cat.txt} (short for catalog) using his favorite
text editor and stores it in a directory named @file{simulationtest} in his
home directory.
+[The @command{cat} command prints the contents of a file; its name is short for ``concatenate''.
+So please copy-paste the lines after ``@command{cat cat.txt}'' into @file{cat.txt} when the editor opens in the steps above it; note that there are 7 lines, the first starting with @key{#}.
+Also be careful when copying from the PDF format; the Info, web, or text formats shouldn't have any problem]:
@example
$ mkdir ~/simulationtest
@@ -2076,8 +1645,8 @@ $ cat cat.txt
@end example
@noindent
-The zero-point magnitude for his observation was 18. Now he has all the
-necessary parameters and runs MakeProfiles with the following command:
+The zero-point magnitude for his observation was 18.
+Now he has all the necessary parameters and runs MakeProfiles with the
following command:
@example
@@ -2102,28 +1671,21 @@ $ls
@cindex Oversample
@noindent
-The file @file{0_cat.fits} is the PSF Sufi had asked for and
-@file{cat.fits} is the image containing the other 5 objects. The PSF is now
-available to him as a separate file for the convolution step. While he was
-preparing the catalog, one of his students approached him and was also
-following the steps. When he opened the image, the student was surprised to
-see that all the stars are only one pixel and not in the shape of the PSF
-as we see when we image the sky at night. So Sufi explained to him that the
-stars will take the shape of the PSF after convolution and this is how they
-would look if we didn't have an atmosphere or an aperture when we took the
-image. The size of the image was also surprising for the student, instead
-of 499 by 499, it was 2615 by 2615 pixels (from the command below):
+The file @file{0_cat.fits} is the PSF Sufi had asked for and @file{cat.fits}
is the image containing the other 5 objects.
+The PSF is now available to him as a separate file for the convolution step.
+While he was preparing the catalog, one of his students approached him and was
also following the steps.
+When he opened the image, the student was surprised to see that all the stars are only one pixel wide and not in the shape of the PSF, as we see when we image the sky at night.
+So Sufi explained to him that the stars will take the shape of the PSF after
convolution and this is how they would look if we didn't have an atmosphere or
an aperture when we took the image.
+The size of the image was also surprising for the student, instead of 499 by
499, it was 2615 by 2615 pixels (from the command below):
@example
$ astfits cat.fits -h1 | grep NAXIS
@end example
@noindent
-So Sufi explained why oversampling is important for parts of the image
-where the flux change is significant over a pixel. Sufi then explained to
-him that after convolving we will re-sample the image to get our originally
-desired size/resolution. To convolve the image, Sufi ran the following
-command:
+So Sufi explained why oversampling is important for parts of the image where
the flux change is significant over a pixel.
+Sufi then explained to him that after convolving we will re-sample the image
to get our originally desired size/resolution.
+To convolve the image, Sufi ran the following command:
@example
$ astconvolve --kernel=0_cat.fits cat.fits
@@ -2144,14 +1706,9 @@ $ls
@end example
@noindent
-When convolution finished, Sufi opened the @file{cat_convolved.fits} file
-and showed the effect of convolution to his student and explained to him
-how a PSF with a larger FWHM would make the points even wider. With the
-convolved image ready, they were prepared to re-sample it to the original
-pixel scale Sufi had planned [from the @command{$ astmkprof -P} command
-above, recall that MakeProfiles had over-sampled the image by 5
-times]. Sufi explained the basic concepts of warping the image to his
-student and ran Warp with the following command:
+When convolution finished, Sufi opened the @file{cat_convolved.fits} file and
showed the effect of convolution to his student and explained to him how a PSF
with a larger FWHM would make the points even wider.
+With the convolved image ready, they were prepared to re-sample it to the
original pixel scale Sufi had planned [from the @command{$ astmkprof -P}
command above, recall that MakeProfiles had over-sampled the image by 5 times].
+Sufi explained the basic concepts of warping the image to his student and ran
Warp with the following command:
@example
$ astwarp --scale=1/5 --centeroncorner cat_convolved.fits
@@ -2174,10 +1731,9 @@ NAXIS2 = 523 / length of data axis 2
@end example
@noindent
-@file{cat_convolved_scaled.fits} now has the correct pixel scale. However,
-the image is still larger than what we had wanted, it is 523
-(@mymath{499+12+12}) by 523 pixels. The student is slightly confused, so
-Sufi also re-samples the PSF with the same scale by running
+@file{cat_convolved_scaled.fits} now has the correct pixel scale.
+However, the image is still larger than what we had wanted; it is 523 (@mymath{499+12+12}) by 523 pixels.
+The student is slightly confused, so Sufi also re-samples the PSF with the same scale by running:
@example
$ astwarp --scale=1/5 --centeroncorner 0_cat.fits
@@ -2188,13 +1744,9 @@ NAXIS2 = 25 / length of data axis 2
@end example
@noindent
-Sufi notes that @mymath{25=(2\times12)+1} and goes on to explain how
-frequency space convolution will dim the edges and that is why he added the
-@option{--prepforconv} option to MakeProfiles, see @ref{If convolving
-afterwards}. Now that convolution is done, Sufi can remove those extra
-pixels using Crop with the command below. Crop's @option{--section} option
-accepts coordinates inclusively and counting from 1 (according to the FITS
-standard), so the crop region's first pixel has to be 13, not 12.
+Sufi notes that @mymath{25=(2\times12)+1} and goes on to explain how frequency
space convolution will dim the edges and that is why he added the
@option{--prepforconv} option to MakeProfiles, see @ref{If convolving
afterwards}.
+Now that convolution is done, Sufi can remove those extra pixels using Crop
with the command below.
+Crop's @option{--section} option accepts coordinates inclusively and counting
from 1 (according to the FITS standard), so the crop region's first pixel has
to be 13, not 12.
@example
$ astcrop cat_convolved_scaled.fits --section=13:*-12,13:*-12 \
@@ -2210,19 +1762,13 @@ cat_convolved.fits cat_convolved_scaled.fits
cat.txt
@end example
@noindent
-Finally, @file{cat_convolved_scaled_cropped.fits} is @mymath{499\times499}
-pixels and the mock Andromeda galaxy is centered on the central pixel (open
-the image in a FITS viewer and confirm this by zooming into the center,
-note that an even-width image wouldn't have a central pixel). This is the
-same dimensions as Sufi had desired in the beginning. All this trouble was
-certainly worth it because now there is no dimming on the edges of the
-image and the profile centers are more accurately sampled.
+Finally, @file{cat_convolved_scaled_cropped.fits} is @mymath{499\times499} pixels and the mock Andromeda galaxy is centered on the central pixel (open the image in a FITS viewer and confirm this by zooming into the center; note that an even-width image wouldn't have a central pixel).
+These are the same dimensions that Sufi had desired in the beginning.
+All this trouble was certainly worth it because now there is no dimming on the
edges of the image and the profile centers are more accurately sampled.
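The pixel-size bookkeeping in the steps above can be checked with plain shell arithmetic; no Gnuastro programs are needed, and the numbers are the ones used in this tutorial (499-pixel final image, 5x oversampling, 12-pixel convolution margin):

```shell
# Size bookkeeping for the simulation: a 499-pixel-wide final
# image, 5x oversampling, and a 12-pixel margin on each edge
# for the frequency-domain convolution.
final=499
sample=5
edge=12

# Oversampled working image: (499 + 12 + 12) * 5 = 2615 pixels.
over=$(( (final + 2*edge) * sample ))
echo "oversampled: $over"

# After warping back down by 1/5: 523 pixels on a side.
scaled=$(( over / sample ))
echo "scaled: $scaled"

# After cropping the 12-pixel margin on each edge: 499 again.
cropped=$(( scaled - 2*edge ))
echo "cropped: $cropped"
```

This reproduces the 2615, 523 and 499 pixel widths seen above.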
-The final step to simulate a real observation would be to add noise to the
-image. Sufi set the zeropoint magnitude to the same value that he set when
-making the mock profiles and looking again at his observation log, he had
-measured the background flux near the nebula had a magnitude of 7 that
-night. So using these values he ran MakeNoise:
+The final step to simulate a real observation would be to add noise to the
image.
+Sufi set the zeropoint magnitude to the same value he had used when making the mock profiles; looking again at his observation log, he saw that the background flux near the nebula had a magnitude of 7 that night.
+So using these values he ran MakeNoise:
@example
$ astmknoise --zeropoint=18 --background=7 --output=out.fits \
@@ -2238,26 +1784,17 @@ cat_convolved.fits cat_convolved_scaled.fits
cat.txt
@end example
@noindent
-The @file{out.fits} file now contains the noised image of the mock catalog
-Sufi had asked for. Seeing how the @option{--output} option allows the user
-to specify the name of the output file, the student was confused and wanted
-to know why Sufi hadn't used it before? Sufi then explained to him that for
-intermediate steps it is best to rely on the automatic output, see
-@ref{Automatic output}. Doing so will give all the intermediate files the
-same basic name structure, so in the end you can simply remove them all
-with the Shell's capabilities. So Sufi decided to show this to the student
-by making a shell script from the commands he had used before.
+The @file{out.fits} file now contains the noised image of the mock catalog
Sufi had asked for.
+Seeing how the @option{--output} option allows the user to specify the name of the output file, the student was confused and wanted to know why Sufi hadn't used it before.
+Sufi then explained to him that for intermediate steps it is best to rely on the automatic output, see @ref{Automatic output}.
+Doing so will give all the intermediate files the same basic name structure, so in the end you can simply remove them all with the shell's capabilities.
+So Sufi decided to show this to the student by making a shell script from the
commands he had used before.
-The command-line shell has the capability to read all the separate input
-commands from a file. This is useful when you want to do the same thing
-multiple times, with only the names of the files or minor parameters
-changing between the different instances. Using the shell's history (by
-pressing the up keyboard key) Sufi reviewed all the commands and then he
-retrieved the last 5 commands with the @command{$ history 5} command. He
-selected all those lines he had input and put them in a text file named
-@file{mymock.sh}. Then he defined the @code{edge} and @code{base} shell
-variables for easier customization later. Finally, before every command, he
-added some comments (lines starting with @key{#}) for future readability.
+The command-line shell has the capability to read all the separate input
commands from a file.
+This is useful when you want to do the same thing multiple times, with only
the names of the files or minor parameters changing between the different
instances.
+Using the shell's history (by pressing the up arrow key), Sufi reviewed all the commands, then retrieved the last 5 of them with the @command{$ history 5} command.
+He selected all those lines he had input and put them in a text file named
@file{mymock.sh}.
+Then he defined the @code{edge} and @code{base} shell variables for easier
customization later.
+Finally, before every command, he added some comments (lines starting with
@key{#}) for future readability.
@example
edge=12
@@ -2296,31 +1833,19 @@ rm 0*.fits "$base"*.fits
@end example
@cindex Comments
-He used this chance to remind the student of the importance of comments in
-code or shell scripts: when writing the code, you have a good mental
-picture of what you are doing, so writing comments might seem superfluous
-and excessive. However, in one month when you want to re-use the script,
-you have lost that mental picture and remembering it can be time-consuming
-and frustrating. The importance of comments is further amplified when you
-want to share the script with a friend/colleague. So it is good to
-accompany any script/code with useful comments while you are writing it
-(create a good mental picture of what/why you are doing something).
+He used this chance to remind the student of the importance of comments in
code or shell scripts: when writing the code, you have a good mental picture of
what you are doing, so writing comments might seem superfluous and excessive.
+However, in one month when you want to re-use the script, you have lost that
mental picture and remembering it can be time-consuming and frustrating.
+The importance of comments is further amplified when you want to share the
script with a friend/colleague.
+So it is good to accompany any script/code with useful comments while you are writing it (while you still have a good mental picture of what you are doing and why).
@cindex Gedit
@cindex GNU Emacs
-Sufi then explained to the eager student that you define a variable by
-giving it a name, followed by an @code{=} sign and the value you
-want. Then you can reference that variable from anywhere in the script
-by calling its name with a @code{$} prefix. So in the script whenever
-you see @code{$base}, the value we defined for it above is used. If
-you use advanced editors like GNU Emacs or even simpler ones like
-Gedit (part of the GNOME graphical user interface) the variables will
-become a different color which can really help in understanding the
-script. We have put all the @code{$base} variables in double quotation
-marks (@code{"}) so the variable name and the following text do not
-get mixed, the shell is going to ignore the @code{"} after replacing
-the variable value. To make the script executable, Sufi ran the
-following command:
+Sufi then explained to the eager student that you define a variable by giving
it a name, followed by an @code{=} sign and the value you want.
+Then you can reference that variable from anywhere in the script by calling
its name with a @code{$} prefix.
+So in the script whenever you see @code{$base}, the value we defined for it
above is used.
+If you use advanced editors like GNU Emacs or even simpler ones like Gedit
(part of the GNOME graphical user interface) the variables will become a
different color which can really help in understanding the script.
+We have put all the @code{$base} variables in double quotation marks
(@code{"}) so the variable name and the following text do not get mixed; the
shell will ignore the @code{"} after replacing the variable's value.
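The quoting convention described above can be sketched in a stand-alone
snippet (the variable name and file names here are invented for illustration,
not taken from Sufi's script):

```shell
# Illustrative only: 'base' and the file names are invented, not from
# Sufi's actual script.
base=cat
echo "$base"alog.txt    # quotes separate the name: prints catalog.txt
echo $basealog.txt      # unquoted, the shell reads the (undefined)
                        # variable 'basealog': prints .txt
```

Without the quotes, the shell takes every character up to the @code{.} as the
variable name, which is why the quoted form is used throughout the script.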
+To make the script executable, Sufi ran the following command:
@example
$ chmod +x mymock.sh
@@ -2333,40 +1858,21 @@ Then finally, Sufi ran the script, simply by calling
its file name:
$ ./mymock.sh
@end example
-After the script finished, the only file remaining is the @file{out.fits}
-file that Sufi had wanted in the beginning. Sufi then explained to the
-student how he could run this script anywhere that he has a catalog if the
-script is in the same directory. The only thing the student had to modify
-in the script was the name of the catalog (the value of the @code{base}
-variable in the start of the script) and the value to the @code{edge}
-variable if he changed the PSF size. The student was also happy to hear
-that he won't need to make it executable again when he makes changes later,
-it will remain executable unless he explicitly changes the executable flag
-with @command{chmod}.
-
-The student was really excited, since now, through simple shell
-scripting, he could really speed up his work and run any command in
-any fashion he likes allowing him to be much more creative in his
-works. Until now he was using the graphical user interface which
-doesn't have such a facility and doing repetitive things on it was
-really frustrating and some times he would make mistakes. So he left
-to go and try scripting on his own computer.
-
-Sufi could now get back to his own work and see if the simulated
-nebula which resembled the one in the Andromeda constellation could be
-detected or not. Although it was extremely faint@footnote{The
-brightness of a diffuse object is added over all its pixels to give
-its final magnitude, see @ref{Flux Brightness and magnitude}. So
-although the magnitude 3.44 (of the mock nebula) is orders of
-magnitude brighter than 6 (of the stars), the central galaxy is much
-fainter. Put another way, the brightness is distributed over a large
-area in the case of a nebula.}, fortunately it passed his detection
-tests and he wrote it in the draft manuscript that would later become
-``Book of fixed stars''. He still had to check the other nebula he saw
-from Yemen and several other such objects, but they could wait until
-tomorrow (thanks to the shell script, he only has to define a new
-catalog). It was nearly sunset and they had to begin preparing for the
-night's measurements on the ecliptic.
+After the script finished, the only file remaining is the @file{out.fits} file
that Sufi had wanted in the beginning.
+Sufi then explained to the student how he could run this script anywhere he
has a catalog, as long as the script is in the same directory.
+The only thing the student had to modify in the script was the name of the
catalog (the value of the @code{base} variable at the start of the script) and
the value of the @code{edge} variable if he changed the PSF size.
+The student was also happy to hear that he won't need to make it executable
again when he makes changes later; it will remain executable unless he
explicitly changes the executable flag with @command{chmod}.
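As a minimal sketch of those two edits (the catalog name and edge value below
are invented for illustration, not taken from the tutorial):

```shell
# Hypothetical header of mymock.sh; the two assignments are the only
# lines to edit for a new catalog or PSF size (values are invented).
base=cat_b   # base name of the new catalog (i.e., the file cat_b.txt)
edge=50      # new PSF edge width in pixels
echo "catalog: $base.txt, edge: $edge"   # prints: catalog: cat_b.txt, edge: 50
```

Everything below these assignments can stay untouched, which is exactly what
makes the script reusable for a new catalog.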
+
+The student was really excited, since now, through simple shell scripting, he
could really speed up his work and run any command in any fashion he likes,
allowing him to be much more creative in his work.
+Until now he had been using the graphical user interface, which doesn't have
such a facility, so doing repetitive things on it was really frustrating and
sometimes he would make mistakes.
+So he left to go and try scripting on his own computer.
+
+Sufi could now get back to his own work and see if the simulated nebula which
resembled the one in the Andromeda constellation could be detected or not.
+Although it was extremely faint@footnote{The brightness of a diffuse object is
added over all its pixels to give its final magnitude, see @ref{Flux Brightness
and magnitude}.
+So although the magnitude 3.44 (of the mock nebula) is orders of magnitude
brighter than 6 (of the stars), the central galaxy is much fainter.
+Put another way, the brightness is distributed over a large area in the case
of a nebula.}, fortunately it passed his detection tests and he wrote it in the
draft manuscript that would later become ``Book of fixed stars''.
+He still had to check the other nebula he saw from Yemen and several other
such objects, but they could wait until tomorrow (thanks to the shell script,
he only had to define a new catalog).
+It was nearly sunset and they had to begin preparing for the night's
measurements on the ecliptic.
@menu
@@ -2378,51 +1884,29 @@ night's measurements on the ecliptic.
@cindex Hubble Space Telescope (HST)
@cindex Colors, broad-band photometry
-Measuring colors of astronomical objects in broad-band or narrow-band
-images is one of the most basic and common steps in astronomical
-analysis. Here, we will use Gnuastro's programs to get a physical scale
-(area at certain redshifts) of the field we are studying, detect objects in
-a Hubble Space Telescope (HST) image, measure their colors and identify the
-ones with the strongest colors, do a visual inspection of these objects and
-inspect spatial position in the image. After this tutorial, you can also
-try the @ref{Detecting large extended targets} tutorial which goes into a
-little more detail on detecting very low surface brightness signal.
-
-During the tutorial, we will take many detours to explain, and practically
-demonstrate, the many capabilities of Gnuastro's programs. In the end you
-will see that the things you learned during this tutorial are much more
-generic than this particular problem and can be used in solving a wide
-variety of problems involving the analysis of data (images or tables). So
-please don't rush, and go through the steps patiently to optimally master
-Gnuastro.
+Measuring colors of astronomical objects in broad-band or narrow-band images
is one of the most basic and common steps in astronomical analysis.
+Here, we will use Gnuastro's programs to get a physical scale (area at certain
redshifts) of the field we are studying, detect objects in a Hubble Space
Telescope (HST) image, measure their colors and identify the ones with the
strongest colors, do a visual inspection of these objects and inspect spatial
position in the image.
+After this tutorial, you can also try the @ref{Detecting large extended
targets} tutorial which goes into a little more detail on detecting very low
surface brightness signal.
+
+During the tutorial, we will take many detours to explain, and practically
demonstrate, the many capabilities of Gnuastro's programs.
+In the end you will see that the things you learned during this tutorial are
much more generic than this particular problem and can be used in solving a
wide variety of problems involving the analysis of data (images or tables).
+So please don't rush, and go through the steps patiently to optimally master
Gnuastro.
@cindex XDF survey
@cindex eXtreme Deep Field (XDF) survey
-In this tutorial, we'll use the HST
-@url{https://archive.stsci.edu/prepds/xdf, eXtreme Deep Field}
-dataset. Like almost all astronomical surveys, this dataset is free for
-download and usable by the public. You will need the following tools in
-this tutorial: Gnuastro, SAO DS9 @footnote{See @ref{SAO ds9}, available at
-@url{http://ds9.si.edu/site/Home.html}}, GNU
-Wget@footnote{@url{https://www.gnu.org/software/wget}}, and AWK (most
-common implementation is GNU
-AWK@footnote{@url{https://www.gnu.org/software/gawk}}).
-
-This tutorial was first prepared for the ``Exploring the Ultra-Low Surface
-Brightness Universe'' workshop (November 2017) at the ISSI in Bern,
-Switzerland. It was further extended in the ``4th Indo-French Astronomy
-School'' (July 2018) organized by LIO, CRAL CNRS UMR5574, UCBL, and IUCAA
-in Lyon, France. We are very grateful to the organizers of these workshops
-and the attendees for the very fruitful discussions and suggestions that
-made this tutorial possible.
+In this tutorial, we'll use the HST @url{https://archive.stsci.edu/prepds/xdf,
eXtreme Deep Field} dataset.
+Like almost all astronomical surveys, this dataset is free for download and
usable by the public.
+You will need the following tools in this tutorial: Gnuastro, SAO DS9
@footnote{See @ref{SAO ds9}, available at
@url{http://ds9.si.edu/site/Home.html}}, GNU
Wget@footnote{@url{https://www.gnu.org/software/wget}}, and AWK (most common
implementation is GNU AWK@footnote{@url{https://www.gnu.org/software/gawk}}).
+
+This tutorial was first prepared for the ``Exploring the Ultra-Low Surface
Brightness Universe'' workshop (November 2017) at the ISSI in Bern, Switzerland.
+It was further extended in the ``4th Indo-French Astronomy School'' (July
2018) organized by LIO, CRAL CNRS UMR5574, UCBL, and IUCAA in Lyon, France.
+We are very grateful to the organizers of these workshops and the attendees
for the very fruitful discussions and suggestions that made this tutorial
possible.
@cartouche
@noindent
-@strong{Write the example commands manually:} Try to type the example
-commands on your terminal manually and use the history feature of your
-command-line (by pressing the ``up'' button to retrieve previous
-commands). Don't simply copy and paste the commands shown here. This will
-help simulate future situations when you are processing your own datasets.
+@strong{Write the example commands manually:} Try to type the example commands
on your terminal manually and use the history feature of your command-line (by
pressing the ``up'' button to retrieve previous commands).
+Don't simply copy and paste the commands shown here.
+This will help simulate future situations when you are processing your own
datasets.
@end cartouche
@@ -2441,54 +1925,40 @@ help simulate future situations when you are processing
your own datasets.
* NoiseChisel optimization for storage:: Dramatically decrease output's
volume.
* Segmentation and making a catalog:: Finding true peaks and creating a
catalog.
* Working with catalogs estimating colors:: Estimating colors using the
catalogs.
-* Aperture photomery:: Doing photometry on a fixed aperture.
+* Aperture photometry:: Doing photometry on a fixed aperture.
* Finding reddest clumps and visual inspection:: Selecting some targets and
inspecting them.
* Citing and acknowledging Gnuastro:: How to cite and acknowledge Gnuastro in
your papers.
@end menu
@node Calling Gnuastro's programs, Accessing documentation, General program
usage tutorial, General program usage tutorial
@subsection Calling Gnuastro's programs
-A handy feature of Gnuastro is that all program names start with
-@code{ast}. This will allow your command-line processor to easily list and
-auto-complete Gnuastro's programs for you. Try typing the following
-command (press @key{TAB} key when you see @code{<TAB>}) to see the list:
+A handy feature of Gnuastro is that all program names start with @code{ast}.
+This will allow your command-line processor to easily list and auto-complete
Gnuastro's programs for you.
+Try typing the following command (press the @key{TAB} key when you see
@code{<TAB>}) to see the list:
@example
$ ast<TAB><TAB>
@end example
@noindent
-Any program that starts with @code{ast} (including all Gnuastro programs)
-will be shown. By choosing the subsequent characters of your desired
-program and pressing @key{<TAB><TAB>} again, the list will narrow down and
-the program name will auto-complete once your input characters are
-unambiguous. In short, you often don't need to type the full name of the
-program you want to run.
+Any program that starts with @code{ast} (including all Gnuastro programs) will
be shown.
+By choosing the subsequent characters of your desired program and pressing
@key{<TAB><TAB>} again, the list will narrow down and the program name will
auto-complete once your input characters are unambiguous.
+In short, you often don't need to type the full name of the program you want
to run.
@node Accessing documentation, Setup and data download, Calling Gnuastro's
programs, General program usage tutorial
@subsection Accessing documentation
-Gnuastro contains a large number of programs and it is natural to forget
-the details of each program's options or inputs and outputs. Therefore,
-before starting the analysis steps of this tutorial, let's review how you
-can access this book to refresh your memory any time you want, without
-having to take your hands off the keyboard.
+Gnuastro contains a large number of programs and it is natural to forget the
details of each program's options or inputs and outputs.
+Therefore, before starting the analysis steps of this tutorial, let's review
how you can access this book to refresh your memory any time you want, without
having to take your hands off the keyboard.
-When you install Gnuastro, this book is also installed on your system along
-with all the programs and libraries, so you don't need an internet
-connection to to access/read it. Also, by accessing this book as described
-below, you can be sure that it corresponds to your installed version of
-Gnuastro.
+When you install Gnuastro, this book is also installed on your system along
with all the programs and libraries, so you don't need an internet connection
to access/read it.
+Also, by accessing this book as described below, you can be sure that it
corresponds to your installed version of Gnuastro.
@cindex GNU Info
-GNU Info@footnote{GNU Info is already available on almost all Unix-like
-operating systems.} is the program in charge of displaying the manual on
-the command-line (for more, see @ref{Info}). To see this whole book on your
-command-line, please run the following command and press subsequent
-keys. Info has its own mini-environment, therefore we'll show the keys that
-must be pressed in the mini-environment after a @code{->} sign. You can
-also ignore anything after the @code{#} sign in the middle of the line,
-they are only for your information.
+GNU Info@footnote{GNU Info is already available on almost all Unix-like
operating systems.} is the program in charge of displaying the manual on the
command-line (for more, see @ref{Info}).
+To see this whole book on your command-line, please run the following command
and press subsequent keys.
+Info has its own mini-environment, therefore we'll show the keys that must be
pressed in the mini-environment after a @code{->} sign.
+You can also ignore anything after the @code{#} sign in the middle of the
line; it is only for your information.
@example
$ info gnuastro # Open the top of the manual.
@@ -2498,62 +1968,47 @@ $ info gnuastro # Open the top of the
manual.
-> q # Quit Info, return to the command-line.
@end example
-The thing that greatly simplifies navigation in Info is the links (regions
-with an underline). You can immediately go to the next link in the page
-with the @key{<TAB>} key and press @key{<ENTER>} on it to go into that part
-of the manual. Try the commands above again, but this time also use
-@key{<TAB>} to go to the links and press @key{<ENTER>} on them to go to the
-respective section of the book. Then follow a few more links and go deeper
-into the book. To return to the previous page, press @key{l} (small L). If
-you are searching for a specific phrase in the whole book (for example an
-option name), press @key{s} and type your search phrase and end it with an
-@key{<ENTER>}.
+The thing that greatly simplifies navigation in Info is the links (regions
with an underline).
+You can immediately go to the next link in the page with the @key{<TAB>} key
and press @key{<ENTER>} on it to go into that part of the manual.
+Try the commands above again, but this time also use @key{<TAB>} to go to the
links and press @key{<ENTER>} on them to go to the respective section of the
book.
+Then follow a few more links and go deeper into the book.
+To return to the previous page, press @key{l} (small L).
+If you are searching for a specific phrase in the whole book (for example an
option name), press @key{s} and type your search phrase and end it with an
@key{<ENTER>}.
-You don't need to start from the top of the manual every time. For example,
-to get to @ref{Invoking astnoisechisel}, run the following command. In
-general, all programs have such an ``Invoking ProgramName'' section in this
-book. These sections are specifically for the description of inputs,
-outputs and configuration options of each program. You can access them
-directly for each program by giving its executable name to Info.
+You don't need to start from the top of the manual every time.
+For example, to get to @ref{Invoking astnoisechisel}, run the following
command.
+In general, all programs have such an ``Invoking ProgramName'' section in this
book.
+These sections are specifically for the description of inputs, outputs and
configuration options of each program.
+You can access them directly for each program by giving its executable name to
Info.
@example
$ info astnoisechisel
@end example
-The other sections don't have such shortcuts. To directly access them from
-the command-line, you need to tell Info to look into Gnuastro's manual,
-then look for the specific section (an unambiguous title is necessary). For
-example, if you only want to review/remember NoiseChisel's @ref{Detection
-options}), just run the following command. Note how case is irrelevant for
-Info when calling a title in this manner.
+The other sections don't have such shortcuts.
+To directly access them from the command-line, you need to tell Info to look
into Gnuastro's manual, then look for the specific section (an unambiguous
title is necessary).
+For example, if you only want to review/remember NoiseChisel's @ref{Detection
options}, just run the following command.
+Note how case is irrelevant for Info when calling a title in this manner.
@example
$ info gnuastro "Detection options"
@end example
-In general, Info is a powerful and convenient way to access this whole book
-with detailed information about the programs you are running. If you are
-not already familiar with it, please run the following command and just
-read along and do what it says to learn it. Don't stop until you feel
-sufficiently fluent in it. Please invest the half an hour's time necessary
-to start using Info comfortably. It will greatly improve your productivity
-and you will start reaping the rewards of this investment very soon.
+In general, Info is a powerful and convenient way to access this whole book
with detailed information about the programs you are running.
+If you are not already familiar with it, please run the following command and
just read along and do what it says to learn it.
+Don't stop until you feel sufficiently fluent in it.
+Please invest the half hour necessary to start using Info comfortably.
+It will greatly improve your productivity and you will start reaping the
rewards of this investment very soon.
@example
$ info info
@end example
-As a good scientist you need to feel comfortable to play with the
-features/options and avoid (be critical to) using default values as much as
-possible. On the other hand, our human memory is limited, so it is
-important to be able to easily access any part of this book fast and
-remember the option names, what they do and their acceptable values.
+As a good scientist you need to feel comfortable playing with the
features/options and avoid (be critical of) using default values as much as
possible.
+On the other hand, our human memory is limited, so it is important to be able
to easily access any part of this book fast and remember the option names, what
they do and their acceptable values.
-If you just want the option names and a short description, calling the
-program with the @option{--help} option might also be a good solution like
-the first example below. If you know a few characters of the option name,
-you can feed the output to @command{grep} like the second or third example
-commands.
+If you just want the option names and a short description, calling the program
with the @option{--help} option might also be a good solution like the first
example below.
+If you know a few characters of the option name, you can feed the output to
@command{grep} like the second or third example commands.
@example
$ astnoisechisel --help
@@ -2564,28 +2019,23 @@ $ astnoisechisel --help | grep check
@node Setup and data download, Dataset inspection and cropping, Accessing
documentation, General program usage tutorial
@subsection Setup and data download
-The first step in the analysis of the tutorial is to download the necessary
-input datasets. First, to keep things clean, let's create a
-@file{gnuastro-tutorial} directory and continue all future steps in it:
+The first step in the analysis of the tutorial is to download the necessary
input datasets.
+First, to keep things clean, let's create a @file{gnuastro-tutorial} directory
and continue all future steps in it:
@example
$ mkdir gnuastro-tutorial
$ cd gnuastro-tutorial
@end example
-We will be using the near infra-red @url{http://www.stsci.edu/hst/wfc3,
-Wide Field Camera} dataset. If you already have them in another directory
-(for example @file{XDFDIR}, with the same FITS file names), you can set the
-@file{download} directory to be a symbolic link to @file{XDFDIR} with a
-command like this:
+We will be using the near infra-red @url{http://www.stsci.edu/hst/wfc3, Wide
Field Camera} dataset.
+If you already have them in another directory (for example @file{XDFDIR}, with
the same FITS file names), you can set the @file{download} directory to be a
symbolic link to @file{XDFDIR} with a command like this:
@example
$ ln -s XDFDIR download
@end example
@noindent
-Otherwise, when the following images aren't already present on your system,
-you can make a @file{download} directory and download them there.
+Otherwise, when the following images aren't already present on your system,
you can make a @file{download} directory and download them there.
@example
$ mkdir download
@@ -2597,15 +2047,11 @@ $ cd ..
@end example
@noindent
-In this tutorial, we'll just use these two filters. Later, you may need to
-download more filters. To do that, you can use the shell's @code{for} loop
-to download them all in series (one after the other@footnote{Note that you
-only have one port to the internet, so downloading in parallel will
-actually be slower than downloading in series.}) with one command like the
-one below for the WFC3 filters. Put this command instead of the two
-@code{wget} commands above. Recall that all the extra spaces, back-slashes
-(@code{\}), and new lines can be ignored if you are typing on the lines on
-the terminal.
+In this tutorial, we'll just use these two filters.
+Later, you may need to download more filters.
+To do that, you can use the shell's @code{for} loop to download them all in
series (one after the other@footnote{Note that you only have one port to the
internet, so downloading in parallel will actually be slower than downloading
in series.}) with one command like the one below for the WFC3 filters.
+Put this command instead of the two @code{wget} commands above.
+Recall that all the extra spaces, back-slashes (@code{\}), and new lines can
be ignored if you are typing the lines on the terminal.
@example
$ for f in f105w f125w f140w f160w; do \
@@ -2617,49 +2063,33 @@ $ for f in f105w f125w f140w f160w; do
\
@node Dataset inspection and cropping, Angular coverage on the sky, Setup and
data download, General program usage tutorial
@subsection Dataset inspection and cropping
-First, let's visually inspect the datasets we downloaded in @ref{Setup and
-data download}. Let's take F160W image as an example. Do the steps below
-with the other image(s) too (and later with any dataset that you want to
-work on). It is very important to get a good visual feeling of the dataset
-you intend to use. Also, note how SAO DS9 (used here for visual inspection
-of FITS images) doesn't follow the GNU style of options where ``long'' and
-``short'' options are preceded by @option{--} and @option{-} respectively
-(for example @option{--width} and @option{-w}, see @ref{Options}).
-
-Run the command below to see the F160W image with DS9. Ds9's
-@option{-zscale} scaling is good to visually highlight the low surface
-brightness regions, and as the name suggests, @option{-zoom to fit} will
-fit the whole dataset in the window. If the window is too small, expand it
-with your mouse, then press the ``zoom'' button on the top row of buttons
-above the image. Afterwards, in the bottom row of buttons, press ``zoom
-fit''. You can also zoom in and out by scrolling your mouse or the
-respective operation on your touch-pad when your cursor/pointer is over the
-image.
+First, let's visually inspect the datasets we downloaded in @ref{Setup and
data download}.
+Let's take the F160W image as an example.
+Do the steps below with the other image(s) too (and later with any dataset
that you want to work on).
+It is very important to get a good visual feel for the dataset you intend to
use.
+Also, note how SAO DS9 (used here for visual inspection of FITS images)
doesn't follow the GNU style of options where ``long'' and ``short'' options
are preceded by @option{--} and @option{-} respectively (for example
@option{--width} and @option{-w}, see @ref{Options}).
+
+Run the command below to see the F160W image with DS9.
+DS9's @option{-zscale} scaling is good to visually highlight the low surface
brightness regions, and as the name suggests, @option{-zoom to fit} will fit
the whole dataset in the window.
+If the window is too small, expand it with your mouse, then press the ``zoom''
button on the top row of buttons above the image.
+Afterwards, in the bottom row of buttons, press ``zoom fit''.
+You can also zoom in and out by scrolling your mouse or the respective
operation on your touch-pad when your cursor/pointer is over the image.
@example
$ ds9 download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits \
-zscale -zoom to fit
@end example
-As you hover your mouse over the image, notice how the ``Value'' and
-positional fields on the top of the ds9 window get updated. The first thing
-you might notice is that when you hover the mouse over the regions with no
-data, they have a value of zero. The next thing might be that the dataset
-actually has two ``depth''s (see @ref{Quantifying measurement
-limits}). Recall that this is a combined/reduced image of many exposures,
-and the parts that have more exposures are deeper. In particular, the
-exposure time of the deep inner region is larger than 4 times of the outer
-(more shallower) parts.
+As you hover your mouse over the image, notice how the ``Value'' and
positional fields on the top of the ds9 window get updated.
+The first thing you might notice is that when you hover the mouse over the
regions with no data, they have a value of zero.
+The next thing might be that the dataset actually has two ``depth''s (see
@ref{Quantifying measurement limits}).
+Recall that this is a combined/reduced image of many exposures, and the parts
that have more exposures are deeper.
+In particular, the exposure time of the deep inner region is more than 4
times that of the outer (shallower) parts.
-To simplify the analysis in this tutorial, we'll only be working on the
-deep field, so let's crop it out of the full dataset. Fortunately the XDF
-survey webpage (above) contains the vertices of the deep flat WFC3-IR
-field. With Gnuastro's Crop program@footnote{To learn more about the crop
-program see @ref{Crop}.}, you can use those vertices to cutout this deep
-region from the larger image. But before that, to keep things organized,
-let's make a directory called @file{flat-ir} and keep the flat
-(single-depth) regions in that directory (with a `@file{xdf-}' suffix for a
-shorter and easier filename).
+To simplify the analysis in this tutorial, we'll only be working on the deep
field, so let's crop it out of the full dataset.
+Fortunately the XDF survey webpage (above) contains the vertices of the deep
flat WFC3-IR field.
+With Gnuastro's Crop program@footnote{To learn more about the Crop program see
@ref{Crop}.}, you can use those vertices to cut out this deep region from the
larger image.
+But before that, to keep things organized, let's make a directory called
@file{flat-ir} and keep the flat (single-depth) regions in that directory (with
a `@file{xdf-}' prefix for a shorter and easier filename).
@example
$ mkdir flat-ir
@@ -2673,15 +2103,11 @@ $ astcrop --mode=wcs -h0
--output=flat-ir/xdf-f160w.fits \
download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
@end example
-The only thing varying in the two calls to Gnuastro's Crop program is the
-filter name. Therefore, to simplify the command, and later allow work on
-more filters, we can use the shell's @code{for} loop. Notice how the two
-places where the filter names (@file{f105w} and @file{f160w}) are used
-above have been replaced with @file{$f} (the shell variable that @code{for}
-will update in every loop) below. In such cases, you should generally avoid
-repeating a command manually and use loops like below. To generalize this
-for more filters later, you can simply add the other filter names in the
-first line before the semi-colon (@code{;}).
+The only thing varying in the two calls to Gnuastro's Crop program is the
filter name.
+Therefore, to simplify the command, and later allow work on more filters, we
can use the shell's @code{for} loop.
+Notice how the two places where the filter names (@file{f105w} and
@file{f160w}) are used above have been replaced with @file{$f} (the shell
variable that @code{for} will update in every loop) below.
+In such cases, you should generally avoid repeating a command manually and use
loops like below.
+To generalize this for more filters later, you can simply add the other filter
names in the first line before the semi-colon (@code{;}).
@example
$ rm flat-ir/*.fits
@@ -2693,22 +2119,14 @@ $ for f in f105w f160w; do
\
done
@end example
-Please open these images and inspect them with the same @command{ds9}
-command you used above. You will see how it is nicely flat now and doesn't
-have varying depths. Another important result of this crop is that regions
-with no data now have a NaN (Not-a-Number, or a blank value) value, not
-zero. Zero is a number, and is thus meaningful, especially when you later
-want to NoiseChisel@footnote{As you will see below, unlike most other
-detection algorithms, NoiseChisel detects the objects from their faintest
-parts, it doesn't start with their high signal-to-noise ratio peaks. Since
-the Sky is already subtracted in many images and noise fluctuates around
-zero, zero is commonly higher than the initial threshold applied. Therefore
-not ignoring zero-valued pixels in this image, will cause them to part of
-the detections!}. Generally, when you want to ignore some pixels in a
-dataset, and avoid higher-level ambiguities or complications, it is always
-best to give them blank values (not zero, or some other absurdly large or
-small number). Gnuastro has the Arithmetic program for such cases, and
-we'll introduce it during this tutorial.
+Please open these images and inspect them with the same @command{ds9} command
you used above.
+You will see how it is nicely flat now and doesn't have varying depths.
+Another important result of this crop is that regions with no data now have a
NaN (Not-a-Number, or a blank value) value, not zero.
+Zero is a number, and is thus meaningful, especially when you later want to
run NoiseChisel@footnote{As you will see below, unlike most other detection
algorithms, NoiseChisel detects the objects from their faintest parts; it
doesn't start with their high signal-to-noise ratio peaks.
+Since the Sky is already subtracted in many images and noise fluctuates around
zero, zero is commonly higher than the initial threshold applied.
+Therefore not ignoring zero-valued pixels in this image will cause them to be
part of the detections!}.
+Generally, when you want to ignore some pixels in a dataset, and avoid
higher-level ambiguities or complications, it is always best to give them blank
values (not zero, or some other absurdly large or small number).
+Gnuastro has the Arithmetic program for such cases, and we'll introduce it
during this tutorial.
@node Angular coverage on the sky, Cosmological coverage, Dataset inspection
and cropping, General program usage tutorial
@subsection Angular coverage on the sky
@@ -2716,44 +2134,29 @@ we'll introduce it during this tutorial.
@cindex @code{CDELT}
@cindex Coordinate scales
@cindex Scales, coordinate
-This is the deepest image we currently have of the sky. The first thing
-that comes to mind may be this: ``How large is this field on the
-sky?''. The FITS world coordinate system (WCS) meta data standard contains
-the key to answering this question: the @code{CDELT} keyword@footnote{In
-the FITS standard, the @code{CDELT} keywords (@code{CDELT1} and
-@code{CDELT2} in a 2D image) specify the scales of each coordinate. In the
-case of this image it is in units of degrees-per-pixel. See Section 8 of
-the @url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf,
-FITS standard} for more. In short, with the @code{CDELT} convention,
-rotation (@code{PC} or @code{CD} keywords) and scales (@code{CDELT}) are
-separated. In the FITS standard the @code{CDELT} keywords are
-optional. When @code{CDELT} keywords aren't present, the @code{PC} matrix
-is assumed to contain @emph{both} the coordinate rotation and scales. Note
-that not all FITS writers use the @code{CDELT} convention. So you might not
-find the @code{CDELT} keywords in the WCS meta data of some FITS
-files. However, all Gnuastro programs (which use the default FITS keyword
-writing format of WCSLIB) write their output WCS with the the @code{CDELT}
-convention, even if the input doesn't have it. If your dataset doesn't use
-the @code{CDELT} convension, you can feed it to any (simple) Gnuastro
-program (for example Arithmetic) and the output will have the @code{CDELT}
-keyword.}.
-
-With the commands below, we'll use @code{CDELT} (along with the image size)
-to find the answer. The lines starting with @code{##} are just comments for
-you to read and understand each command. Don't type them on the
-terminal. The commands are intentionally repetitive in some places to
-better understand each step and also to demonstrate the beauty of
-command-line features like history, variables, pipes and loops (which you
-will commonly use as you master the command-line).
+This is the deepest image we currently have of the sky.
+The first thing that comes to mind may be this: ``How large is this field on
the sky?''.
+The FITS world coordinate system (WCS) meta data standard contains the key to
answering this question: the @code{CDELT} keyword@footnote{In the FITS
standard, the @code{CDELT} keywords (@code{CDELT1} and @code{CDELT2} in a 2D
image) specify the scales of each coordinate.
+In the case of this image it is in units of degrees-per-pixel.
+See Section 8 of the
@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf, FITS
standard} for more.
+In short, with the @code{CDELT} convention, rotation (@code{PC} or @code{CD}
keywords) and scales (@code{CDELT}) are separated.
+In the FITS standard the @code{CDELT} keywords are optional.
+When @code{CDELT} keywords aren't present, the @code{PC} matrix is assumed to
contain @emph{both} the coordinate rotation and scales.
+Note that not all FITS writers use the @code{CDELT} convention.
+So you might not find the @code{CDELT} keywords in the WCS meta data of some
FITS files.
+However, all Gnuastro programs (which use the default FITS keyword writing
format of WCSLIB) write their output WCS with the @code{CDELT} convention, even
if the input doesn't have it.
+If your dataset doesn't use the @code{CDELT} convention, you can feed it to
any (simple) Gnuastro program (for example Arithmetic) and the output will have
the @code{CDELT} keyword.}.
+
+With the commands below, we'll use @code{CDELT} (along with the image size) to
find the answer.
+The lines starting with @code{##} are just comments for you to read and
understand each command.
+Don't type them on the terminal.
+The commands are intentionally repetitive in some places to better understand
each step and also to demonstrate the beauty of command-line features like
history, variables, pipes and loops (which you will commonly use as you master
the command-line).
@cartouche
@noindent
-@strong{Use shell history:} Don't forget to make effective use of your
-shell's history: you don't have to re-type previous command to add
-something to them. This is especially convenient when you just want to make
-a small change to your previous command. Press the ``up'' key on your
-keyboard (possibly multiple times) to see your previous command(s) and
-modify them accordingly.
+@strong{Use shell history:} Don't forget to make effective use of your shell's
history: you don't have to re-type previous commands to add something to them.
+This is especially convenient when you just want to make a small change to
your previous command.
+Press the ``up'' key on your keyboard (possibly multiple times) to see your
previous command(s) and modify them accordingly.
@end cartouche
@example
@@ -2799,35 +2202,26 @@ $ echo $n $r
$ echo $n $r | awk '@{print $1 * ($2^2) * 3600@}'
@end example
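The same area calculation can be checked with illustrative stand-in values
(the @code{n} and @code{r} below are assumptions for demonstration, not the
tutorial's measured numbers): @code{n} pixels times @code{CDELT} squared gives
the area in degrees squared, and one degree squared is 3600 arc-minutes
squared.

```shell
# Sketch with assumed values (not the tutorial's real measurements):
n=1000000            # total number of pixels in the image (assumed)
r=0.000016666667     # CDELT: degrees per pixel (assumed, ~0.06"/pix)
# n * (deg/pix)^2 gives deg^2; times 3600 converts to arcmin^2:
echo $n $r | awk '{print $1 * ($2^2) * 3600}'
```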
-The output of the last command (area of this field) is 4.03817 (or
-approximately 4.04) arc-minutes squared. Just for comparison, this is
-roughly 175 times smaller than the average moon's angular area (with a
-diameter of 30arc-minutes or half a degree).
+The output of the last command (area of this field) is 4.03817 (or
approximately 4.04) arc-minutes squared.
+Just for comparison, this is roughly 175 times smaller than the Moon's average
angular area (with a diameter of 30 arc-minutes or half a degree).
@cindex GNU AWK
@cartouche
@noindent
-@strong{AWK for table/value processing:} As you saw above AWK is a powerful
-and simple tool for text processing. You will see it often in shell
-scripts. GNU AWK (the most common implementation) comes with a free and
-wonderful @url{https://www.gnu.org/software/gawk/manual/, book} in the same
-format as this book which will allow you to master it nicely. Just like
-this manual, you can also access GNU AWK's manual on the command-line
-whenever necessary without taking your hands off the keyboard. Just run
-@code{info awk}.
+@strong{AWK for table/value processing:} As you saw above AWK is a powerful
and simple tool for text processing.
+You will see it often in shell scripts.
+GNU AWK (the most common implementation) comes with a free and wonderful
@url{https://www.gnu.org/software/gawk/manual/, book} in the same format as
this book which will allow you to master it nicely.
+Just like this manual, you can also access GNU AWK's manual on the
command-line whenever necessary without taking your hands off the keyboard.
+Just run @code{info awk}.
@end cartouche
@node Cosmological coverage, Building custom programs with the library,
Angular coverage on the sky, General program usage tutorial
@subsection Cosmological coverage
-Having found the angular coverage of the dataset in @ref{Angular coverage
-on the sky}, we can now use Gnuastro to answer a more physically motivated
-question: ``How large is this area at different redshifts?''. To get a
-feeling of the tangential area that this field covers at redshift 2, you
-can use Gnuastro's CosmicCalcular program (@ref{CosmicCalculator}). In
-particular, you need the tangential distance covered by 1 arc-second as raw
-output. Combined with the field's area that was measured before, we can
-calculate the tangential distance in Mega Parsecs squared (@mymath{Mpc^2}).
+Having found the angular coverage of the dataset in @ref{Angular coverage on
the sky}, we can now use Gnuastro to answer a more physically motivated
question: ``How large is this area at different redshifts?''.
+To get a feeling of the tangential area that this field covers at redshift 2,
you can use Gnuastro's CosmicCalculator program (@ref{CosmicCalculator}).
+In particular, you need the tangential distance covered by 1 arc-second as raw
output.
+Combined with the field's area that was measured before, we can calculate the
tangential distance in Mega Parsecs squared (@mymath{Mpc^2}).
@example
## Print general cosmological properties at redshift 2 (for example).
@@ -2863,9 +2257,8 @@ $ echo $k $a | awk '@{print $1 * $2 / 1e6@}'
@end example
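To get a rough feeling for where such numbers come from, here is a standalone
sketch (this is @emph{not} Gnuastro's implementation) that numerically
integrates the comoving distance for an assumed flat @mymath{\Lambda}CDM
cosmology (H0 of 70 km/s/Mpc, matter density 0.3) and prints the physical
scale covered by one arcsecond at redshift 2:

```shell
# Rough standalone sketch, NOT Gnuastro's implementation: physical
# kpc/arcsec at z=2 for an assumed flat LCDM (H0=70, Om=0.3, OL=0.7).
awk 'BEGIN {
  H0 = 70; om = 0.3; ol = 0.7; c = 299792.458; z = 2; dz = 1e-4;
  dc = 0;
  for (zp = 0; zp < z; zp += dz)          # comoving distance integral
    dc += c / (H0 * sqrt(om * (1+zp)^3 + ol)) * dz;
  da = dc / (1 + z);                      # angular diameter distance
  arcsec = 3.14159265358979 / (180 * 3600);
  print da * arcsec * 1000, "kpc/arcsec"
}'
```

The result is in the several-kpc-per-arcsecond range, consistent with the
@mymath{Mpc^2} coverage computed above for this few-arcminute field.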
@noindent
-At redshift 2, this field therefore covers approximately 1.07
-@mymath{Mpc^2}. If you would like to see how this tangential area changes
-with redshift, you can use a shell loop like below.
+At redshift 2, this field therefore covers approximately 1.07 @mymath{Mpc^2}.
+If you would like to see how this tangential area changes with redshift, you
can use a shell loop like below.
@example
$ for z in 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0; do \
@@ -2875,11 +2268,9 @@ $ for z in 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0; do
\
@end example
@noindent
-Fortunately, the shell has a useful tool/program to print a sequence of
-numbers that is nicely called @code{seq}. You can use it instead of typing
-all the different redshifts in this example. For example the loop below
-will calculate and print the tangential coverage of this field across a
-larger range of redshifts (0.1 to 5) and with finer increments of 0.1.
+Fortunately, the shell has a useful tool/program to print a sequence of
numbers that is nicely called @code{seq}.
+You can use it instead of typing all the different redshifts in this example.
+For example the loop below will calculate and print the tangential coverage of
this field across a larger range of redshifts (0.1 to 5) and with finer
increments of 0.1.
@example
$ for z in $(seq 0.1 0.1 5); do \
@@ -2891,30 +2282,18 @@ $ for z in $(seq 0.1 0.1 5); do
\
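If @code{seq} is new to you, you can try it in isolation first: it takes the
first value, the increment and the last value, and prints one number per line.

```shell
# seq FIRST INCREMENT LAST: prints 0.5, 1.0, 1.5 and 2.0, one per line.
seq 0.5 0.5 2.0
```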
@node Building custom programs with the library, Option management and
configuration files, Cosmological coverage, General program usage tutorial
@subsection Building custom programs with the library
-In @ref{Cosmological coverage}, we repeated a certain calculation/output of
-a program multiple times using the shell's @code{for} loop. This simple way
-repeating a calculation is great when it is only necessary once. However,
-if you commonly need this calculation and possibly for a larger number of
-redshifts at higher precision, the command above can be slow (try it out to
-see).
-
-This slowness of the repeated calls to a generic program (like
-CosmicCalculator), is because it can have a lot of overhead on each
-call. To be generic and easy to operate, it has to parse the command-line
-and all configuration files (see @ref{Option management and configuration
-files}) which contain human-readable characters and need a lot of
-pre-processing to be ready for processing by the computer. Afterwards,
-CosmicCalculator has to check the sanity of its inputs and check which of
-its many options you have asked for. All the this pre-processing takes as
-much time as the high-level calculation you are requesting, and it has to
-re-do all of these for every redshift in your loop.
-
-To greatly speed up the processing, you can directly access the core
-work-horse of CosmicCalculator without all that overhead by designing your
-custom program for this job. Using Gnuastro's library, you can write your
-own tiny program particularly designed for this exact calculation (and
-nothing else!). To do that, copy and paste the following C program in a
-file called @file{myprogram.c}.
+In @ref{Cosmological coverage}, we repeated a certain calculation/output of a
program multiple times using the shell's @code{for} loop.
+This simple way of repeating a calculation is great when it is only necessary
once.
+However, if you commonly need this calculation and possibly for a larger
number of redshifts at higher precision, the command above can be slow (try it
out to see).
+
+This slowness of the repeated calls to a generic program (like
CosmicCalculator) is because it can have a lot of overhead on each call.
+To be generic and easy to operate, it has to parse the command-line and all
configuration files (see @ref{Option management and configuration files}) which
contain human-readable characters and need a lot of pre-processing to be ready
for processing by the computer.
+Afterwards, CosmicCalculator has to check the sanity of its inputs and check
which of its many options you have asked for.
+All this pre-processing takes as much time as the high-level calculation you
are requesting, and it has to re-do all of it for every redshift in your loop.
+
+To greatly speed up the processing, you can directly access the core
work-horse of CosmicCalculator without all that overhead by designing your
custom program for this job.
+Using Gnuastro's library, you can write your own tiny program particularly
designed for this exact calculation (and nothing else!).
+To do that, copy and paste the following C program in a file called
@file{myprogram.c}.
@example
#include <math.h>
@@ -2958,46 +2337,31 @@ $ astbuildprog myprogram.c
@end example
@noindent
-In the command above, you used Gnuastro's BuildProgram program. Its job is
-to greatly simplify the compilation, linking and running of simple C
-programs that use Gnuastro's library (like this one). BuildProgram is
-designed to manage Gnuastro's dependencies, compile and link your custom
-program and then run it.
+In the command above, you used Gnuastro's BuildProgram program.
+Its job is to greatly simplify the compilation, linking and running of simple
C programs that use Gnuastro's library (like this one).
+BuildProgram is designed to manage Gnuastro's dependencies, compile and link
your custom program and then run it.
-Did you notice how your custom program was much faster than the repeated
-calls to CosmicCalculator in the previous section? You might have noticed
-that a new file called @file{myprogram} is also created in the
-directory. This is the compiled program that was created and run by the
-command above (its in binary machine code format, not human-readable any
-more). You can run it again to get the same results with a command like
-this:
+Did you notice how your custom program was much faster than the repeated calls
to CosmicCalculator in the previous section?
+You might have noticed that a new file called @file{myprogram} is also created
in the directory.
+This is the compiled program that was created and run by the command above
(it's in binary machine code format, not human-readable any more).
+You can run it again to get the same results with a command like this:
@example
$ ./myprogram
@end example
-The efficiency of your custom @file{myprogram} compared to repeated calls
-to CosmicCalculator is because in the latter, the requested processing is
-comparable to the necessary overheads. For other programs that take large
-input datasets and do complicated processing on them, the overhead is
-usually negligible compared to the processing. In such cases, the libraries
-are only useful if you want a different/new processing compared to the
-functionalities in Gnuastro's existing programs.
+Your custom @file{myprogram} is more efficient than repeated calls to
CosmicCalculator because in the latter, the requested processing is comparable
to the necessary overheads.
+For other programs that take large input datasets and do complicated
processing on them, the overhead is usually negligible compared to the
processing.
+In such cases, the libraries are only useful if you want a different/new
processing compared to the functionalities in Gnuastro's existing programs.
-Gnuastro has a large library which is used extensively by all the
-programs. In other words, the library is like the skeleton of Gnuastro. For
-the full list of available functions classified by context, please see
-@ref{Gnuastro library}. Gnuastro's library and BuildProgram are created to
-make it easy for you to use these powerful features as you like. This gives
-you a high level of creativity, while also providing efficiency and
-robustness. Several other complete working examples (involving images and
-tables) of Gnuastro's libraries can be see in @ref{Library demo
-programs}.
+Gnuastro has a large library which is used extensively by all the programs.
+In other words, the library is like the skeleton of Gnuastro.
+For the full list of available functions classified by context, please see
@ref{Gnuastro library}.
+Gnuastro's library and BuildProgram are created to make it easy for you to use
these powerful features as you like.
+This gives you a high level of creativity, while also providing efficiency and
robustness.
+Several other complete working examples (involving images and tables) of
Gnuastro's libraries can be seen in @ref{Library demo programs}.
-But for this tutorial, let's stop discussing the libraries at this point in
-and get back to Gnuastro's already built programs which don't need any
-programming. But before continuing, let's clean up the files we don't need
-any more:
+But for this tutorial, let's stop discussing the libraries at this point and
get back to Gnuastro's already built programs which don't need any programming.
+But before continuing, let's clean up the files we don't need any more:
@example
$ rm myprogram*
@@ -3006,61 +2370,43 @@ $ rm myprogram*
@node Option management and configuration files, Warping to a new pixel grid,
Building custom programs with the library, General program usage tutorial
@subsection Option management and configuration files
-None of Gnuastro's programs keep a default value internally within their
-code. However, when you ran CosmicCalculator only with the @option{-z2}
-option (not specifying the cosmological parameters) in @ref{Cosmological
-coverage}, it completed its processing and printed results. Where did the
-necessary cosmological parameters (like the matter density and etc) that
-are necessary for its calculations come from? Fast reply: the values come
-from a configuration file (see @ref{Configuration file precedence}).
-
-CosmicCalculator is a small program with a limited set of
-parameters/options. Therefore, let's use it to discuss configuration files
-in Gnuastro (for more, you can always see @ref{Configuration
-files}). Configuration files are an important part of all Gnuastro's
-programs, especially the ones with a large number of options, so its
-important to understand this part well .
-
-Once you get comfortable with configuration files here, you can make good
-use of them in all Gnuastro programs (for example, NoiseChisel). For
-example, to do optimal detection on various datasets, you can have
-configuration files for different noise properties. The configuration of
-each program (besides its version) is vital for the reproducibility of your
-results, so it is important to manage them properly.
-
-As we saw above, the full list of the options in all Gnuastro programs can
-be seen with the @option{--help} option. Try calling it with
-CosmicCalculator as shown below. Note how options are grouped by context to
-make it easier to find your desired option. However, in each group, options
-are ordered alphabetically.
+None of Gnuastro's programs keep a default value internally within their code.
+However, when you ran CosmicCalculator only with the @option{-z2} option (not
specifying the cosmological parameters) in @ref{Cosmological coverage}, it
completed its processing and printed results.
+Where did the cosmological parameters (like the matter density) that are
necessary for its calculations come from?
+Fast reply: the values come from a configuration file (see @ref{Configuration
file precedence}).
+
+CosmicCalculator is a small program with a limited set of parameters/options.
+Therefore, let's use it to discuss configuration files in Gnuastro (for more,
you can always see @ref{Configuration files}).
+Configuration files are an important part of all Gnuastro's programs,
especially the ones with a large number of options, so it's important to
understand this part well.
+
+Once you get comfortable with configuration files here, you can make good use
of them in all Gnuastro programs (for example, NoiseChisel).
+For example, to do optimal detection on various datasets, you can have
configuration files for different noise properties.
+The configuration of each program (besides its version) is vital for the
reproducibility of your results, so it is important to manage them properly.
+
+As we saw above, the full list of the options in all Gnuastro programs can be
seen with the @option{--help} option.
+Try calling it with CosmicCalculator as shown below.
+Note how options are grouped by context to make it easier to find your desired
option.
+However, in each group, options are ordered alphabetically.
@example
$ astcosmiccal --help
@end example
@noindent
-The options that need a value have an @key{=} sign after their long version
-and @code{FLT}, @code{INT} or @code{STR} for floating point numbers,
-integer numbers, and strings (filenames for example) respectively. All
-options have a long format and some have a short format (a single
-character), for more see @ref{Options}.
+The options that need a value have an @key{=} sign after their long version
and @code{FLT}, @code{INT} or @code{STR} for floating point numbers, integer
numbers, and strings (filenames for example) respectively.
+All options have a long format and some have a short format (a single
character), for more see @ref{Options}.
-When you are using a program, it is often necessary to check the value the
-option has just before the program starts its processing. In other words,
-after it has parsed the command-line options and all configuration
-files. You can see the values of all options that need one with the
-@option{--printparams} or @code{-P} option. @option{--printparams} is
-common to all programs (see @ref{Common options}). In the command below,
-try replacing @code{-P} with @option{--printparams} to see how both do the
-same operation.
+When you are using a program, it is often necessary to check the value the
option has just before the program starts its processing.
+In other words, after it has parsed the command-line options and all
configuration files.
+You can see the values of all options that need one with the
@option{--printparams} or @code{-P} option.
+@option{--printparams} is common to all programs (see @ref{Common options}).
+In the command below, try replacing @code{-P} with @option{--printparams} to
see how both do the same operation.
@example
$ astcosmiccal -P
@end example
-Let's say you want a different Hubble constant. Try running the following
-command (just adding @option{--H0=70} after the command above) to see how
-the Hubble constant in the output of the command above has changed.
+Let's say you want a different Hubble constant.
+Try running the following command (just adding @option{--H0=70} after the
command above) to see how the Hubble constant in the output of the command
above has changed.
@example
$ astcosmiccal -P --H0=70
@@ -3074,12 +2420,9 @@ calculations with the new cosmology (or configuration).
$ astcosmiccal --H0=70 -z2
@end example
-From the output of the @code{--help} option, note how the option for Hubble
-constant has both short (@code{-H}) and long (@code{--H0}) formats. One
-final note is that the equal (@key{=}) sign is not mandatory. In the short
-format, the value can stick to the actual option (the short option name is
-just one character after-all, thus easily identifiable) and in the long
-format, a white-space character is also enough.
+From the output of the @code{--help} option, note how the option for Hubble
constant has both short (@code{-H}) and long (@code{--H0}) formats.
+One final note is that the equal (@key{=}) sign is not mandatory.
+In the short format, the value can stick to the actual option (the short
option name is just one character after all, thus easily identifiable) and in
the long format, a white-space character is also enough.
@example
$ astcosmiccal -H70 -z2
@@ -3087,35 +2430,28 @@ $ astcosmiccal --H0 70 -z2 --arcsectandist
@end example
@noindent
-When an option dosn't need a value, and has a short format (like
-@option{--arcsectandist}), you can easily append it @emph{before} other
-short options. So the last command above can also be written as:
+When an option doesn't need a value, and has a short format (like @option{-s}
for @option{--arcsectandist}), you can easily append it @emph{before} other
short options.
+So the last command above can also be written as:
@example
$ astcosmiccal --H0 70 -sz2
@end example
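Packing value-less short options is a general GNU/POSIX command-line
convention, not something specific to Gnuastro, so you can try it with any
familiar tool; for example, these two @command{ls} calls are equivalent:

```shell
# Packed and separate short options give identical results
# (a throw-away directory is used so the listing is predictable).
d=$(mktemp -d)
touch "$d/example-file"
ls -l -a "$d"    # separate short options
ls -la "$d"      # the same options, packed
rm -r "$d"
```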
-Let's assume that in one project, you want to only use rounded cosmological
-parameters (H0 of 70km/s/Mpc and matter density of 0.3). You should
-therefore run CosmicCalculator like this:
+Let's assume that in one project, you want to only use rounded cosmological
parameters (H0 of 70km/s/Mpc and matter density of 0.3).
+You should therefore run CosmicCalculator like this:
@example
$ astcosmiccal --H0=70 --olambda=0.7 --omatter=0.3 -z2
@end example
-But having to type these extra options every time you run CosmicCalculator
-will be prone to errors (typos in particular), frustrating and
-slow. Therefore in Gnuastro, you can put all the options and their values
-in a ``Configuration file'' and tell the programs to read the option values
-from there.
+But having to type these extra options every time you run CosmicCalculator
will be prone to errors (typos in particular), frustrating and slow.
+Therefore in Gnuastro, you can put all the options and their values in a
``Configuration file'' and tell the programs to read the option values from
there.
-Let's create a configuration file... With your favorite text editor, make a
-file named @file{my-cosmology.conf} (or @file{my-cosmology.txt}, the suffix
-doesn't matter, but a more descriptive suffix like @file{.conf} is
-recommended). Then put the following lines inside of it. One space between
-the option value and name is enough, the values are just under each other
-to help in readability. Also note that you can only use long option names
-in configuration files.
+Let's create a configuration file...
+With your favorite text editor, make a file named @file{my-cosmology.conf} (or
@file{my-cosmology.txt}, the suffix doesn't matter, but a more descriptive
suffix like @file{.conf} is recommended).
+Then put the following lines inside of it.
+One space between the option value and name is enough, the values are just
under each other to help in readability.
+Also note that you can only use long option names in configuration files.
@example
H0 70
@@ -3124,31 +2460,23 @@ omatter 0.3
@end example
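If you prefer staying on the command-line over opening a text editor, the same
file can be written with a here-document (a sketch; the file name and values
follow the tutorial, and remember that only long option names are valid in
configuration files):

```shell
# Write the configuration file from the shell instead of an editor
# ("name value" pairs, one per line, long option names only):
cat > my-cosmology.conf <<EOF
H0        70
olambda   0.7
omatter   0.3
EOF
cat my-cosmology.conf
```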
@noindent
-You can now tell CosmicCalculator to read this file for option values
-immediately using the @option{--config} option as shown below. Do you see
-how the output of the following command corresponds to the option values in
-@file{my-cosmology.conf}, and is therefore identical to the previous
-command?
+You can now tell CosmicCalculator to read this file for option values
immediately using the @option{--config} option as shown below.
+Do you see how the output of the following command corresponds to the option
values in @file{my-cosmology.conf}, and is therefore identical to the previous
command?
@example
$ astcosmiccal --config=my-cosmology.conf -z2
@end example
-But still, having to type @option{--config=my-cosmology.conf} everytime is
-annoying, isn't it? If you need this cosmology every time you are working
-in a specific directory, you can use Gnuastro's default configuration file
-names and avoid having to type it manually.
+But still, having to type @option{--config=my-cosmology.conf} every time is
annoying, isn't it?
+If you need this cosmology every time you are working in a specific directory,
you can use Gnuastro's default configuration file names and avoid having to
type it manually.
-The default configuration files (that are checked if they exist) must be
-placed in the hidden @file{.gnuastro} sub-directory (in the same directory
-you are running the program). Their file name (within @file{.gnuastro})
-must also be the same as the program's executable name. So in the case of
-CosmicCalculator, the default configuration file in a given directory is
-@file{.gnuastro/astcosmiccal.conf}.
+The default configuration files (that are checked if they exist) must be
placed in the hidden @file{.gnuastro} sub-directory (in the same directory you
are running the program).
+Their file name (within @file{.gnuastro}) must also be the same as the
program's executable name.
+So in the case of CosmicCalculator, the default configuration file in a given
directory is @file{.gnuastro/astcosmiccal.conf}.
-Let's do this. We'll first make a directory for our custom cosmology, then
-build a @file{.gnuastro} within it. Finally, we'll copy the custom
-configuration file there:
+Let's do this.
+We'll first make a directory for our custom cosmology, then build a
@file{.gnuastro} within it.
+Finally, we'll copy the custom configuration file there:
@example
$ mkdir my-cosmology
@@ -3156,9 +2484,7 @@ $ mkdir my-cosmology/.gnuastro
$ mv my-cosmology.conf my-cosmology/.gnuastro/astcosmiccal.conf
@end example
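Equivalently, the per-directory default configuration can be sketched in one
go from the shell (directory and file names are those used in the tutorial:
the file must live in @file{.gnuastro/} and be named after the program's
executable):

```shell
# Per-directory defaults: .gnuastro/ next to where you run the program,
# with the file named after the executable (astcosmiccal here).
mkdir -p my-cosmology/.gnuastro
printf 'H0 70\nolambda 0.7\nomatter 0.3\n' \
       > my-cosmology/.gnuastro/astcosmiccal.conf
ls my-cosmology/.gnuastro
```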
-Once you run CosmicCalculator within @file{my-cosmology} (as shown below),
-you will see how your custom cosmology has been implemented without having
-to type anything extra on the command-line.
+Once you run CosmicCalculator within @file{my-cosmology} (as shown below), you
will see how your custom cosmology has been implemented without having to type
anything extra on the command-line.
@example
$ cd my-cosmology
@@ -3166,11 +2492,9 @@ $ astcosmiccal -P
$ cd ..
@end example
-To further simplify the process, you can use the @option{--setdirconf}
-option. If you are already in your desired working directory, calling this
-option with the others will automatically write the final values (along
-with descriptions) in @file{.gnuastro/astcosmiccal.conf}. For example try
-the commands below:
+To further simplify the process, you can use the @option{--setdirconf} option.
+If you are already in your desired working directory, calling this option with
the others will automatically write the final values (along with descriptions)
in @file{.gnuastro/astcosmiccal.conf}.
+For example try the commands below:
@example
$ mkdir my-cosmology2
@@ -3181,18 +2505,14 @@ $ astcosmiccal -P
$ cd ..
@end example
-Gnuastro's programs also have default configuration files for a specific
-user (when run in any directory). This allows you to set a special behavior
-every time a program is run by a specific user. Only the directory and
-filename differ from the above, the rest of the process is similar to
-before. Finally, there are also system-wide configuration files that can be
-used to define the option values for all users on a system. See
-@ref{Configuration file precedence} for a more detailed discussion.
+Gnuastro's programs also have default configuration files for a specific user (when run in any directory).
+This allows you to set a special behavior every time a program is run by a specific user.
+Only the directory and filename differ from the above; the rest of the process is similar to before.
+Finally, there are also system-wide configuration files that can be used to define the option values for all users on a system.
+See @ref{Configuration file precedence} for a more detailed discussion.
-We'll stop the discussion on configuration files here, but you can always
-read about them in @ref{Configuration files}. Before continuing the
-tutorial, let's delete the two extra directories that we don't need any
-more:
+We'll stop the discussion on configuration files here, but you can always read about them in @ref{Configuration files}.
+Before continuing the tutorial, let's delete the two extra directories that we don't need any more:
@example
$ rm -rf my-cosmology*
@@ -3201,38 +2521,30 @@ $ rm -rf my-cosmology*
@node Warping to a new pixel grid, Multiextension FITS files NoiseChisel's output, Option management and configuration files, General program usage tutorial
@subsection Warping to a new pixel grid
-We are now ready to start processing the downloaded images. The XDF
-datasets we are using here are already aligned to the same pixel
-grid. However, warping to a different/matched pixel grid is commonly needed
-before higher-level analysis when you are using datasets from different
-instruments. So let's have a look at Gnuastro's features warping features
-here.
+We are now ready to start processing the downloaded images.
+The XDF datasets we are using here are already aligned to the same pixel grid.
+However, warping to a different/matched pixel grid is commonly needed before higher-level analysis when you are using datasets from different instruments.
+So let's have a look at Gnuastro's warping features here.
-Gnuastro's Warp program should be used for warping the pixel-grid (see
-@ref{Warp}). For example, try rotating one of the images by 20 degrees:
+Gnuastro's Warp program should be used for warping the pixel-grid (see @ref{Warp}).
+For example, try rotating one of the images by 20 degrees:
@example
$ astwarp flat-ir/xdf-f160w.fits --rotate=20
@end example
@noindent
-Open the output (@file{xdf-f160w_rotated.fits}) and see how it is
-rotated. If your final image is already aligned with RA and Dec, you can
-simply use the @option{--align} option and let Warp calculate the necessary
-rotation and apply it. For example, try aligning the rotated image back to
-the standard orientation (just note that because of the two rotations, the
-NaN parts of the image are larger now):
+Open the output (@file{xdf-f160w_rotated.fits}) and see how it is rotated.
+If your final image is already aligned with RA and Dec, you can simply use the @option{--align} option and let Warp calculate the necessary rotation and apply it.
+For example, try aligning the rotated image back to the standard orientation (just note that because of the two rotations, the NaN parts of the image are larger now):
@example
$ astwarp xdf-f160w_rotated.fits --align
@end example
-Warp can generally be used for many kinds of pixel grid manipulation
-(warping), not just rotations. For example the outputs of the commands
-below will respectively have larger pixels (new resolution being one
-quarter the original resolution), get shifted by 2.8 (by sub-pixel), get a
-shear of 2, and be tilted (projected). Run each of them and open the output
-file to see the effect, they will become handy for you in the future.
+Warp can generally be used for many kinds of pixel grid manipulation (warping), not just rotations.
+For example, the outputs of the commands below will respectively have larger pixels (new resolution being one quarter the original resolution), get shifted by 2.8 (by sub-pixel), get a shear of 2, and be tilted (projected).
+Run each of them and open the output file to see the effect; they will become handy for you in the future.
@example
$ astwarp flat-ir/xdf-f160w.fits --scale=0.25
@@ -3242,34 +2554,24 @@ $ astwarp flat-ir/xdf-f160w.fits --project=0.001,0.0005
@end example
@noindent
-If you need to do multiple warps, you can combine them in one call to
-Warp. For example to first rotate the image, then scale it, run this
-command:
+If you need to do multiple warps, you can combine them in one call to Warp.
+For example, to first rotate the image, then scale it, run this command:
@example
$ astwarp flat-ir/xdf-f160w.fits --rotate=20 --scale=0.25
@end example
-If you have multiple warps, do them all in one command. Don't warp them in
-separate commands because the correlated noise will become too strong. As
-you see in the matrix that is printed when you run Warp, it merges all the
-warps into a single warping matrix (see @ref{Merging multiple warpings})
-and simply applies that (mixes the pixel values) just once. However, if you
-run Warp multiple times, the pixels will be mixed multiple times, creating
-a strong artificial blur/smoothing, or stronger correlated noise.
+If you have multiple warps, do them all in one command.
+Don't warp them in separate commands because the correlated noise will become too strong.
+As you see in the matrix that is printed when you run Warp, it merges all the warps into a single warping matrix (see @ref{Merging multiple warpings}) and simply applies that (mixes the pixel values) just once.
+However, if you run Warp multiple times, the pixels will be mixed multiple times, creating a strong artificial blur/smoothing, or stronger correlated noise.
-Recall that the merging of multiple warps is done through matrix
-multiplication, therefore order matters in the separate operations. At a
-lower level, through Warp's @option{--matrix} option, you can directly
-request your desired final warp and don't have to break it up into
-different warps like above (see @ref{Invoking astwarp}).
+Recall that the merging of multiple warps is done through matrix multiplication; therefore, order matters in the separate operations.
+At a lower level, through Warp's @option{--matrix} option, you can directly request your desired final warp and don't have to break it up into different warps like above (see @ref{Invoking astwarp}).
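As a numerical sketch of this merging (assuming the comma-separated four-value form of @option{--matrix} for a 2D linear warp; see @ref{Invoking astwarp} for the authoritative syntax), a 20-degree rotation followed by a 0.25 scaling is the single matrix product S(0.25)R(20), whose elements can be computed and passed in one call:

```shell
# Merged warp matrix: scale(0.25) x rotate(20deg). With
# cos(20deg)=0.9397 and sin(20deg)=0.3420, the four elements
# come out as +/-0.2349 and +/-0.0855.
m=$(awk 'BEGIN{d=20*atan2(0,-1)/180; s=0.25;
               printf "%.4f,%.4f,%.4f,%.4f",
                      s*cos(d), -s*sin(d), s*sin(d), s*cos(d)}')
echo "$m"    # 0.2349,-0.0855,0.0855,0.2349

# Hypothetical single-call warp with the merged matrix (intended to
# have the same effect as --rotate=20 --scale=0.25 in one command):
# astwarp flat-ir/xdf-f160w.fits --matrix=$m
```

Note that S(0.25)R(20) and R(20)S(0.25) happen to be equal here because a uniform scaling commutes with a rotation; with a shear or non-uniform scaling, the two orders would give different matrices.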
-Fortunately these datasets are already aligned to the same pixel grid, so
-you don't actually need the files that were just generated. You can safely
-delete them all with the following command. Here, you see why we put the
-processed outputs that we need later into a separate directory. In this
-way, the top directory can be used for temporary files for testing that you
-can simply delete with a generic command like below.
+Fortunately these datasets are already aligned to the same pixel grid, so you don't actually need the files that were just generated.
+You can safely delete them all with the following command.
+Here, you see why we put the processed outputs that we need later into a separate directory.
+In this way, the top directory can be used for temporary files for testing that you can simply delete with a generic command like below.
@example
$ rm *.fits
@@ -3278,80 +2580,51 @@ $ rm *.fits
@node Multiextension FITS files NoiseChisel's output, NoiseChisel optimization for detection, Warping to a new pixel grid, General program usage tutorial
@subsection Multiextension FITS files (NoiseChisel's output)
-Having completed a review of the basics in the previous sections, we are
-now ready to separate the signal (galaxies or stars) from the background
-noise in the image. We will be using the results of @ref{Dataset inspection
-and cropping}, so be sure you already have them. Gnuastro has NoiseChisel
-for this job. But NoiseChisel's output is a multi-extension FITS file,
-therefore to better understand how to use NoiseChisel, let's take a look at
-multi-extension FITS files and how you can interact with them.
+Having completed a review of the basics in the previous sections, we are now ready to separate the signal (galaxies or stars) from the background noise in the image.
+We will be using the results of @ref{Dataset inspection and cropping}, so be sure you already have them.
+Gnuastro has NoiseChisel for this job.
+But NoiseChisel's output is a multi-extension FITS file; therefore, to better understand how to use NoiseChisel, let's take a look at multi-extension FITS files and how you can interact with them.
-In the FITS format, each extension contains a separate dataset (image in
-this case). You can get basic information about the extensions in a FITS
-file with Gnuastro's Fits program (see @ref{Fits}). To start with, let's
-run NoiseChisel without any options, then use Gnuastro's FITS program to
-inspect the number of extensions in this file.
+In the FITS format, each extension contains a separate dataset (image in this case).
+You can get basic information about the extensions in a FITS file with Gnuastro's Fits program (see @ref{Fits}).
+To start with, let's run NoiseChisel without any options, then use Gnuastro's Fits program to inspect the number of extensions in this file.
@example
$ astnoisechisel flat-ir/xdf-f160w.fits
$ astfits xdf-f160w_detected.fits
@end example
-From the output list, we see that NoiseChisel's output contains 5
-extensions and the first (counting from zero, with name
-@code{NOISECHISEL-CONFIG}) is empty: it has value of @code{0} in the last
-column (which shows its size). The first extension in all the outputs of
-Gnuastro's programs only contains meta-data: data about/describing the
-datasets within (all) the output's extensions. This is recommended by the
-FITS standard, see @ref{Fits} for more. In the case of Gnuastro's programs,
-this generic zero-th/meta-data extension (for the whole file) contains all
-the configuration options of the program that created the file.
-
-The second extension of NoiseChisel's output (numbered 1, named
-@code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided. The
-third (@code{DETECTIONS}) is NoiseChisel's main output which is a binary
-image with only two possible values for all pixels: 0 for noise and 1 for
-signal. Since it only has two values, to avoid taking too much space on
-your computer, its numeric datatype an unsigned 8-bit integer (or
-@code{uint8})@footnote{To learn more about numeric data types see
-@ref{Numeric data types}.}. The fourth and fifth (@code{SKY} and
-@code{SKY_STD}) extensions, have the Sky and its standard deviation values
-for the input on a tile grid and were calculated over the undetected
-regions (for more on the importance of the Sky value, see @ref{Sky value}).
-
-Metadata regarding how the analysis was done (or a dataset was created) is
-very important for higher-level analysis and reproducibility. Therefore,
-Let's first take a closer look at the @code{NOISECHISEL-CONFIG}
-extension. If you specify a special header in the FITS file, Gnuastro's
-Fits program will print the header keywords (metadata) of that
-extension. You can either specify the HDU/extension counter (starting from
-0), or name. Therefore, the two commands below are identical for this file:
+From the output list, we see that NoiseChisel's output contains 5 extensions and the first (counting from zero, with name @code{NOISECHISEL-CONFIG}) is empty: it has a value of @code{0} in the last column (which shows its size).
+The first extension in all the outputs of Gnuastro's programs only contains meta-data: data about/describing the datasets within (all) the output's extensions.
+This is recommended by the FITS standard; see @ref{Fits} for more.
+In the case of Gnuastro's programs, this generic zero-th/meta-data extension (for the whole file) contains all the configuration options of the program that created the file.
+
+The second extension of NoiseChisel's output (numbered 1, named @code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided.
+The third (@code{DETECTIONS}) is NoiseChisel's main output, which is a binary image with only two possible values for all pixels: 0 for noise and 1 for signal.
+Since it only has two values, to avoid taking too much space on your computer, its numeric datatype is an unsigned 8-bit integer (or @code{uint8})@footnote{To learn more about numeric data types see @ref{Numeric data types}.}.
+The fourth and fifth (@code{SKY} and @code{SKY_STD}) extensions have the Sky and its standard deviation values for the input on a tile grid and were calculated over the undetected regions (for more on the importance of the Sky value, see @ref{Sky value}).
+
+Metadata regarding how the analysis was done (or a dataset was created) is very important for higher-level analysis and reproducibility.
+Therefore, let's first take a closer look at the @code{NOISECHISEL-CONFIG} extension.
+If you specify a particular HDU/extension of the FITS file, Gnuastro's Fits program will print the header keywords (metadata) of that extension.
+You can either specify the HDU/extension counter (starting from 0), or its name.
+Therefore, the two commands below are identical for this file:
@example
$ astfits xdf-f160w_detected.fits -h0
$ astfits xdf-f160w_detected.fits -hNOISECHISEL-CONFIG
@end example
-The first group of FITS header keywords are standard keywords (containing
-the @code{SIMPLE} and @code{BITPIX} keywords the first empty line). They
-are required by the FITS standard and must be present in any FITS
-extension. The second group contains the input file and all the options
-with their values in that run of NoiseChisel. Finally, the last group
-contains the date and version information of Gnuastro and its
-dependencies. The ``versions and date'' group of keywords are present in
-all Gnuastro's FITS extension outputs, for more see @ref{Output FITS
-files}.
-
-Note that if a keyword name is larger than 8 characters, it is preceded by
-a @code{HIERARCH} keyword and that all keyword names are in capital
-letters. Therefore, if you want to see only one keyword's value by feeding
-the output to Grep, you should ask Grep to ignore case with its @option{-i}
-option (short name for @option{--ignore-case}). For example, below we'll
-check the value to the @option{--snminarea} option, note how we don't need
-Grep's @option{-i} option when it is fed with @command{astnoisechisel -P}
-since it is already in small-caps there. The extra white spaces in the
-first command are only to help in readability, you can ignore them when
-typing.
+The first group of FITS header keywords are standard keywords (from the @code{SIMPLE} and @code{BITPIX} keywords to the first empty line).
+They are required by the FITS standard and must be present in any FITS extension.
+The second group contains the input file and all the options with their values in that run of NoiseChisel.
+Finally, the last group contains the date and version information of Gnuastro and its dependencies.
+The ``versions and date'' group of keywords is present in all Gnuastro's FITS extension outputs; for more, see @ref{Output FITS files}.
+
+Note that if a keyword name is larger than 8 characters, it is preceded by a @code{HIERARCH} keyword, and that all keyword names are in capital letters.
+Therefore, if you want to see only one keyword's value by feeding the output to Grep, you should ask Grep to ignore case with its @option{-i} option (short name for @option{--ignore-case}).
+For example, below we'll check the value of the @option{--snminarea} option; note how we don't need Grep's @option{-i} option when it is fed with @command{astnoisechisel -P}, since the option name is already in lower-case there.
+The extra white spaces in the first command are only to help in readability; you can ignore them when typing.
@example
$ astnoisechisel -P | grep snminarea
@@ -3359,78 +2632,51 @@ $ astfits xdf-f160w_detected.fits -h0 | grep -i snminarea
@end example
@noindent
-The metadata (that is stored in the output) can later be used to exactly
-reproduce/understand your result, even if you have lost/forgot the command
-you used to create the file. This feature is present in all of Gnuastro's
-programs, not just NoiseChisel.
+The metadata (that is stored in the output) can later be used to exactly reproduce/understand your result, even if you have lost/forgotten the command you used to create the file.
+This feature is present in all of Gnuastro's programs, not just NoiseChisel.
@cindex DS9
@cindex GNOME
@cindex SAO DS9
-Let's continue with the extensions in NoiseChisel's output that contain a
-dataset by visually inspecting them (here, we'll use SAO DS9). Since the
-file contains multiple related extensions, the easiest way to view all of
-them in DS9 is to open the file as a ``Multi-extension data cube'' with the
-@option{-mecube} option as shown below@footnote{You can configure your
-graphic user interface to open DS9 in multi-extension cube mode by default
-when using the GUI (double clicking on the file). If your graphic user
-interface is GNOME (another GNU software, it is most common in GNU/Linux
-operating systems), a full description is given in @ref{Viewing
-multiextension FITS images}}.
+Let's continue with the extensions in NoiseChisel's output that contain a dataset by visually inspecting them (here, we'll use SAO DS9).
+Since the file contains multiple related extensions, the easiest way to view all of them in DS9 is to open the file as a ``Multi-extension data cube'' with the @option{-mecube} option as shown below@footnote{You can configure your graphical user interface to open DS9 in multi-extension cube mode by default when using the GUI (double clicking on the file).
+If your graphical user interface is GNOME (another GNU software; it is most common in GNU/Linux operating systems), a full description is given in @ref{Viewing multiextension FITS images}.}.
@example
$ ds9 -mecube xdf-f160w_detected.fits -zscale -zoom to fit
@end example
-A ``cube'' window opens along with DS9's main window. The buttons and
-horizontal scroll bar in this small new window can be used to navigate
-between the extensions. In this mode, all DS9's settings (for example zoom
-or color-bar) will be identical between the extensions. Try zooming into to
-one part and flipping through the extensions to see how the galaxies were
-detected along with the Sky and Sky standard deviation values for that
-region. Just have in mind that NoiseChisel's job is @emph{only} detection
-(separating signal from noise), We'll do segmentation on this result later
-to find the individual galaxies/peaks over the detected pixels.
+A ``cube'' window opens along with DS9's main window.
+The buttons and horizontal scroll bar in this small new window can be used to navigate between the extensions.
+In this mode, all DS9's settings (for example zoom or color-bar) will be identical between the extensions.
+Try zooming in to one part and flipping through the extensions to see how the galaxies were detected along with the Sky and Sky standard deviation values for that region.
+Just keep in mind that NoiseChisel's job is @emph{only} detection (separating signal from noise); we'll do segmentation on this result later to find the individual galaxies/peaks over the detected pixels.
-Each HDU/extension in a FITS file is an independent dataset (image or
-table) which you can delete from the FITS file, or copy/cut to another
-file. For example, with the command below, you can copy NoiseChisel's
-@code{DETECTIONS} HDU/extension to another file:
+Each HDU/extension in a FITS file is an independent dataset (image or table) which you can delete from the FITS file, or copy/cut to another file.
+For example, with the command below, you can copy NoiseChisel's @code{DETECTIONS} HDU/extension to another file:
@example
$ astfits xdf-f160w_detected.fits --copy=DETECTIONS -odetections.fits
@end example
-There are similar options to conveniently cut (@option{--cut}, copy, then
-remove from the input) or delete (@option{--remove}) HDUs from a FITS file
-also. See @ref{HDU manipulation} for more.
+There are also similar options to conveniently cut (@option{--cut}: copy, then remove from the input) or delete (@option{--remove}) HDUs from a FITS file.
+See @ref{HDU manipulation} for more.
@node NoiseChisel optimization for detection, NoiseChisel optimization for storage, Multiextension FITS files NoiseChisel's output, General program usage tutorial
@subsection NoiseChisel optimization for detection
-In @ref{Multiextension FITS files NoiseChisel's output}, we ran NoiseChisel
-and reviewed NoiseChisel's output format. Now that you have a better
-feeling for multi-extension FITS files, let's optimize NoiseChisel for this
-particular dataset.
-
-One good way to see if you have missed any signal (small galaxies, or the
-wings of brighter galaxies) is to mask all the detected pixels and inspect
-the noise pixels. For this, you can use Gnuastro's Arithmetic program (in
-particular its @code{where} operator, see @ref{Arithmetic operators}). The
-command below will produce @file{mask-det.fits}. In it, all the pixels in
-the @code{INPUT-NO-SKY} extension that are flagged 1 in the
-@code{DETECTIONS} extension (dominated by signal, not noise) will be set to
-NaN.
-
-Since the various extensions are in the same file, for each dataset we need
-the file and extension name. To make the command easier to
-read/write/understand, let's use shell variables: `@code{in}' will be used
-for the Sky-subtracted input image and `@code{det}' will be used for the
-detection map. Recall that a shell variable's value can be retrieved by
-adding a @code{$} before its name, also note that the double quotations are
-necessary when we have white-space characters in a variable name (like this
-case).
+In @ref{Multiextension FITS files NoiseChisel's output}, we ran NoiseChisel and reviewed NoiseChisel's output format.
+Now that you have a better feeling for multi-extension FITS files, let's optimize NoiseChisel for this particular dataset.
+
+One good way to see if you have missed any signal (small galaxies, or the wings of brighter galaxies) is to mask all the detected pixels and inspect the noise pixels.
+For this, you can use Gnuastro's Arithmetic program (in particular its @code{where} operator, see @ref{Arithmetic operators}).
+The command below will produce @file{mask-det.fits}.
+In it, all the pixels in the @code{INPUT-NO-SKY} extension that are flagged 1 in the @code{DETECTIONS} extension (dominated by signal, not noise) will be set to NaN.
+
+Since the various extensions are in the same file, for each dataset we need the file and extension name.
+To make the command easier to read/write/understand, let's use shell variables: `@code{in}' will be used for the Sky-subtracted input image and `@code{det}' will be used for the detection map.
+Recall that a shell variable's value can be retrieved by adding a @code{$} before its name; also note that the double quotations are necessary when the variable's value contains white-space characters (like this case).
@example
$ in="xdf-f160w_detected.fits -hINPUT-NO-SKY"
@@ -3439,187 +2685,130 @@ $ astarithmetic $in $det nan where --output=mask-det.fits
@end example
@noindent
-To invert the result (only keep the detected pixels), you can flip the
-detection map (from 0 to 1 and vice-versa) by adding a `@code{not}' after
-the second @code{$det}:
+To invert the result (only keep the detected pixels), you can flip the detection map (from 0 to 1 and vice-versa) by adding a `@code{not}' after the second @code{$det}:
@example
$ astarithmetic $in $det not nan where --output=mask-sky.fits
@end example
-Looking again at the detected pixels, we see that there are thin
-connections between many of the smaller objects or extending from larger
-objects. This shows that we have dug in too deep, and that we are following
-correlated noise.
-
-Correlated noise is created when we warp datasets from individual exposures
-(that are each slightly offset compared to each other) into the same pixel
-grid, then add them to form the final result. Because it mixes nearby pixel
-values, correlated noise is a form of convolution and it smooths the
-image. In terms of the number of exposures (and thus correlated noise), the
-XDF dataset is by no means an ordinary dataset. It is the result of warping
-and adding roughly 80 separate exposures which can create strong correlated
-noise/smoothing. In common surveys the number of exposures is usually 10 or
-less.
-
-Let's tweak NoiseChisel's configuration a little to get a better result on
-this dataset. Don't forget that ``@emph{Good statistical analysis is not a
-purely routine matter, and generally calls for more than one pass through
-the computer}'' (Anscombe 1973, see @ref{Science and its tools}). A good
-scientist must have a good understanding of her tools to make a meaningful
-analysis. So don't hesitate in playing with the default configuration and
-reviewing the manual when you have a new dataset in front of you. Robust
-data analysis is an art, therefore a good scientist must first be a good
-artist.
-
-NoiseChisel can produce ``Check images'' to help you visualize and inspect
-how each step is done. You can see all the check images it can produce with
-this command.
+Looking again at the detected pixels, we see that there are thin connections between many of the smaller objects, or extending from larger objects.
+This shows that we have dug in too deep, and that we are following correlated noise.
+
+Correlated noise is created when we warp datasets from individual exposures (that are each slightly offset compared to each other) into the same pixel grid, then add them to form the final result.
+Because it mixes nearby pixel values, correlated noise is a form of convolution and it smooths the image.
+In terms of the number of exposures (and thus correlated noise), the XDF dataset is by no means an ordinary dataset.
+It is the result of warping and adding roughly 80 separate exposures, which can create strong correlated noise/smoothing.
+In common surveys, the number of exposures is usually 10 or less.
+
+Let's tweak NoiseChisel's configuration a little to get a better result on this dataset.
+Don't forget that ``@emph{Good statistical analysis is not a purely routine matter, and generally calls for more than one pass through the computer}'' (Anscombe 1973, see @ref{Science and its tools}).
+A good scientist must have a good understanding of her tools to make a meaningful analysis.
+So don't hesitate to play with the default configuration and review the manual when you have a new dataset in front of you.
+Robust data analysis is an art; therefore, a good scientist must first be a good artist.
+
+NoiseChisel can produce ``check images'' to help you visualize and inspect how each step is done.
+You can see all the check images it can produce with this command:
@example
$ astnoisechisel --help | grep check
@end example
-Let's check the overall detection process to get a better feeling of what
-NoiseChisel is doing with the following command. To learn the details of
-NoiseChisel in more detail, please see
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}. Also
-see @ref{NoiseChisel changes after publication}.
+Let's check the overall detection process to get a better feeling of what NoiseChisel is doing with the following command.
+To learn about NoiseChisel in more detail, please see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+Also see @ref{NoiseChisel changes after publication}.
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --checkdetection
@end example
-The check images/tables are also multi-extension FITS files. As you saw
-from the command above, when check datasets are requested, NoiseChisel
-won't go to the end. It will abort as soon as all the extensions of the
-check image are ready. Please list the extensions of the output with
-@command{astfits} and then opening it with @command{ds9} as we done
-above. If you have read the paper, you will see why there are so many
-extensions in the check image.
+The check images/tables are also multi-extension FITS files.
+As you saw from the command above, when check datasets are requested, NoiseChisel won't go to the end.
+It will abort as soon as all the extensions of the check image are ready.
+Please list the extensions of the output with @command{astfits} and then open it with @command{ds9} as we did above.
+If you have read the paper, you will see why there are so many extensions in the check image.
@example
$ astfits xdf-f160w_detcheck.fits
$ ds9 -mecube xdf-f160w_detcheck.fits -zscale -zoom to fit
@end example
-In order to understand the parameters and their biases (especially as you
-are starting to use Gnuastro, or running it a new dataset), it is
-@emph{strongly} encouraged to play with the different parameters and use
-the respective check images to see which step is affected by your changes
-and how, for example see @ref{Detecting large extended targets}.
+In order to understand the parameters and their biases (especially as you are starting to use Gnuastro, or running it on a new dataset), it is @emph{strongly} encouraged to play with the different parameters and use the respective check images to see which step is affected by your changes and how; for example, see @ref{Detecting large extended targets}.
@cindex FWHM
-The @code{OPENED_AND_LABELED} extension shows the initial detection step of
-NoiseChisel. We see these thin connections between smaller points are
-already present here (a relatively early stage in the processing). Such
-connections at the lowest surface brightness limits usually occur when the
-dataset is too smoothed. Because of correlated noise, the dataset is
-already artificially smoothed, therefore further smoothing it with the
-default kernel may be the problem. One solution is thus to use a sharper
-kernel (NoiseChisel's first step in its processing).
-
-By default NoiseChisel uses a Gaussian with full-width-half-maximum (FWHM)
-of 2 pixels. We can use Gnuastro's MakeProfiles to build a kernel with FWHM
-of 1.5 pixel (truncated at 5 times the FWHM, like the default) using the
-following command. MakeProfiles is a powerful tool to build any number of
-mock profiles on one image or independently, to learn more of its features
-and capabilities, see @ref{MakeProfiles}.
+The @code{OPENED_AND_LABELED} extension shows the initial detection step of NoiseChisel.
+We see that these thin connections between smaller points are already present here (a relatively early stage in the processing).
+Such connections at the lowest surface brightness limits usually occur when the dataset is too smoothed.
+Because of correlated noise, the dataset is already artificially smoothed; therefore, further smoothing it with the default kernel may be the problem.
+One solution is thus to use a sharper kernel (NoiseChisel's first step in its processing).
+
+By default, NoiseChisel uses a Gaussian with a full-width-half-maximum (FWHM) of 2 pixels.
+We can use Gnuastro's MakeProfiles to build a kernel with a FWHM of 1.5 pixels (truncated at 5 times the FWHM, like the default) using the following command.
+MakeProfiles is a powerful tool to build any number of mock profiles on one image or independently; to learn more about its features and capabilities, see @ref{MakeProfiles}.
@example
$ astmkprof --kernel=gaussian,1.5,5 --oversample=1
@end example
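As an aside (this is the standard Gaussian identity, not anything specific to MakeProfiles), the FWHM relates to the Gaussian standard deviation by FWHM = 2*sqrt(2 ln 2)*sigma, approximately 2.35482*sigma, so the 1.5-pixel FWHM requested above corresponds to a sigma of roughly 0.64 pixels:

```shell
# Convert the 1.5-pixel FWHM to the equivalent Gaussian sigma,
# using FWHM = 2*sqrt(2*ln(2)) * sigma (approximately 2.35482*sigma):
awk 'BEGIN{printf "sigma = %.3f pixels\n", 1.5/(2*sqrt(2*log(2)))}'
# prints: sigma = 0.637 pixels
```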
@noindent
-Please open the output @file{kernel.fits} and have a look (it is very small
-and sharp). We can now tell NoiseChisel to use this instead of the default
-kernel with the following command (we'll keep checking the detection steps)
+Please open the output @file{kernel.fits} and have a look (it is very small and sharp).
+We can now tell NoiseChisel to use this instead of the default kernel with the following command (we'll keep checking the detection steps):
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
--checkdetection
@end example
-Looking at the @code{OPENED_AND_LABELED} extension, we see that the thin
-connections between smaller peaks has now significantly decreased. Going
-two extensions/steps ahead (in the first @code{HOLES-FILLED}), you can see
-that during the process of finding false pseudo-detections, too many holes
-have been filled: do you see how the many of the brighter galaxies are
-connected? At this stage all holes are filled, irrespective of their size.
+Looking at the @code{OPENED_AND_LABELED} extension, we see that the thin
connections between smaller peaks have now significantly decreased.
+Going two extensions/steps ahead (in the first @code{HOLES-FILLED}), you can
see that during the process of finding false pseudo-detections, too many holes
have been filled: do you see how many of the brighter galaxies are connected?
+At this stage all holes are filled, irrespective of their size.
-Try looking two extensions ahead (in the first @code{PSEUDOS-FOR-SN}), you
-can see that there aren't too many pseudo-detections because of all those
-extended filled holes. If you look closely, you can see the number of
-pseudo-detections in the result NoiseChisel prints (around 5000). This is
-another side-effect of correlated noise. To address it, we should slightly
-increase the pseudo-detection threshold (before changing
-@option{--dthresh}, run with @option{-P} to see the default value):
+Try looking two extensions ahead (in the first @code{PSEUDOS-FOR-SN}): you can
see that there aren't many pseudo-detections because of all those extended
filled holes.
+If you look closely, you can see the number of pseudo-detections in the result
NoiseChisel prints (around 5000).
+This is another side-effect of correlated noise.
+To address it, we should slightly increase the pseudo-detection threshold
(before changing @option{--dthresh}, run with @option{-P} to see the default
value):
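+As the parenthetical note above suggests, you can inspect the default value of
@option{--dthresh} with @option{-P} (the @command{grep} filter is just an
illustrative convenience; the exact default may differ between Gnuastro
versions):
@example
$ astnoisechisel -P | grep dthresh
@end example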
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
--dthresh=0.1 --checkdetection
@end example
-Before visually inspecting the check image, you can already see the effect
-of this change in NoiseChisel's command-line output: notice how the number
-of pseudos has increased to more than 6000. Open the check image now and
-have a look, you can see how the pseudo-detections are distributed much
-more evenly in the image.
+Before visually inspecting the check image, you can already see the effect of
this change in NoiseChisel's command-line output: notice how the number of
pseudos has increased to more than 6000.
+Open the check image now and have a look: you can see how the
pseudo-detections are distributed much more evenly in the image.
@cartouche
@noindent
-@strong{Maximize the number of pseudo-detecitons:} For a new noise-pattern
-(different instrument), play with @code{--dthresh} until you get a maximal
-number of pseudo-detections (the total number of pseudo-detections is
-printed on the command-line when you run NoiseChisel).
+@strong{Maximize the number of pseudo-detections:} For a new noise-pattern
(different instrument), play with @code{--dthresh} until you get a maximal
number of pseudo-detections (the total number of pseudo-detections is printed
on the command-line when you run NoiseChisel).
@end cartouche
-The signal-to-noise ratio of pseudo-detections define NoiseChisel's
-reference for removing false detections, so they are very important to get
-right. Let's have a look at their signal-to-noise distribution with
-@option{--checksn}.
+The signal-to-noise ratio of pseudo-detections defines NoiseChisel's reference
for removing false detections, so it is very important to get right.
+Let's have a look at their signal-to-noise distribution with
@option{--checksn}.
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
--dthresh=0.1 --checkdetection --checksn
@end example
-The output (@file{xdf-f160w_detsn.fits}) contains two extensions for the
-pseudo-detections over the undetected (sky) regions and those over
-detections. The first column is the pseudo-detection label which you can
-see in the respective@footnote{The first @code{PSEUDOS-FOR-SN} in
-@file{xdf-f160w_detsn.fits} is for the pseudo-detections over the
-undetected regions and the second is for those over detected regions.}
-@code{PSEUDOS-FOR-SN} extension of @file{xdf-f160w_detcheck.fits}. You can
-see the table columns with the first command below and get a feeling for
-its distribution with the second command (the two Table and Statistics
-programs will be discussed later in the tutorial)
+The output (@file{xdf-f160w_detsn.fits}) contains two extensions for the
pseudo-detections over the undetected (sky) regions and those over detections.
+The first column is the pseudo-detection label which you can see in the
respective@footnote{The first @code{PSEUDOS-FOR-SN} in
@file{xdf-f160w_detsn.fits} is for the pseudo-detections over the undetected
regions and the second is for those over detected regions.}
@code{PSEUDOS-FOR-SN} extension of @file{xdf-f160w_detcheck.fits}.
+You can see the table columns with the first command below and get a feeling
for its distribution with the second command (the Table and Statistics
programs will be discussed later in the tutorial):
@example
$ asttable xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN
$ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2
@end example
-The correlated noise is again visible in this pseudo-detection
-signal-to-noise distribution: it is highly skewed. A small change in the
-quantile will translate into a big change in the S/N value. For example see
-the difference between the three 0.99, 0.95 and 0.90 quantiles with this
-command:
+The correlated noise is again visible in this pseudo-detection signal-to-noise
distribution: it is highly skewed.
+A small change in the quantile will translate into a big change in the S/N
value.
+For example, see the difference between the 0.99, 0.95 and 0.90 quantiles
with this command:
@example
$ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2 \
--quantile=0.99 --quantile=0.95 --quantile=0.90
@end example
-If you run NoiseChisel with @option{-P}, you'll see the default
-signal-to-noise quantile @option{--snquant} is 0.99. In effect with this
-option you specify the purity level you want (contamination by false
-detections). With the @command{aststatistics} command above, you see that a
-small number of extra false detections (impurity) in the final result
-causes a big change in completeness (you can detect more lower
-signal-to-noise true detections). So let's loosen-up our desired purity
-level, remove the check-image options, and then mask the detected pixels
-like before to see if we have missed anything.
+If you run NoiseChisel with @option{-P}, you'll see the default
signal-to-noise quantile @option{--snquant} is 0.99.
+In effect, with this option you specify the purity level you want
(contamination by false detections).
+With the @command{aststatistics} command above, you see that a small number of
extra false detections (impurity) in the final result causes a big change in
completeness (you can detect more true detections with lower signal-to-noise
ratios).
+So let's loosen up our desired purity level, remove the check-image options,
and then mask the detected pixels like before to see if we have missed anything.
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
@@ -3629,34 +2818,26 @@ $ det="xdf-f160w_detected.fits -hDETECTIONS"
$ astarithmetic $in $det nan where --output=mask-det.fits
@end example
-Overall it seems good, but if you play a little with the color-bar and look
-closer in the noise, you'll see a few very sharp, but faint, objects that
-have not been detected. This only happens for under-sampled datasets like
-HST (where the pixel size is larger than the point spread function
-FWHM). So this won't happen on ground-based images. Because of this, sharp
-and faint objects will be very small and eroded too easily during
-NoiseChisel's erosion step.
+Overall it seems good, but if you play a little with the color-bar and look
closer in the noise, you'll see a few very sharp, but faint, objects that have
not been detected.
+This only happens for under-sampled datasets like HST (where the pixel size is
larger than the point spread function FWHM).
+So this won't happen on ground-based images.
+Because of this under-sampling, sharp and faint objects will cover very few
pixels and be eroded too easily during NoiseChisel's erosion step.
-To address this problem of sharp objects, we can use NoiseChisel's
-@option{--noerodequant} option. All pixels above this quantile will not be
-eroded, thus allowing us to preserve faint and sharp objects. Check its
-default value, then run NoiseChisel like below and make the mask again. You
-will see many of those sharp objects are now detected.
+To address this problem of sharp objects, we can use NoiseChisel's
@option{--noerodequant} option.
+All pixels above this quantile will not be eroded, thus allowing us to
preserve faint and sharp objects.
+Check its default value, then run NoiseChisel like below and make the mask
again.
+You will see many of those sharp objects are now detected.
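+As before, the default value mentioned above can be inspected with
@option{-P} (the @command{grep} filter below is only an illustrative
convenience):
@example
$ astnoisechisel -P | grep noerodequant
@end example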
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
--noerodequant=0.95 --dthresh=0.1 --snquant=0.95
@end example
-This seems to be fine and we can continue with our analysis. To avoid
-having to write these options on every call to NoiseChisel, we'll just make
-a configuration file in a visible @file{config} directory. Then we'll
-define the hidden @file{.gnuastro} directory (that all Gnuastro's programs
-will look into for configuration files) as a symbolic link to the
-@file{config} directory. Finally, we'll write the finalized values of the
-options into NoiseChisel's standard configuration file within that
-directory. We'll also put the kernel in a separate directory to keep the
-top directory clean of any files we later need.
+This seems to be fine and we can continue with our analysis.
+To avoid having to write these options on every call to NoiseChisel, we'll
just make a configuration file in a visible @file{config} directory.
+Then we'll define the hidden @file{.gnuastro} directory (that all Gnuastro's
programs will look into for configuration files) as a symbolic link to the
@file{config} directory.
+Finally, we'll write the finalized values of the options into NoiseChisel's
standard configuration file within that directory.
+We'll also put the kernel in a separate directory to keep the top directory
clean of any files we later need.
@example
$ mkdir kernel config
@@ -3682,93 +2863,68 @@ $ astnoisechisel flat-ir/xdf-f105w.fits
--output=nc/xdf-f105w.fits
@node NoiseChisel optimization for storage, Segmentation and making a catalog,
NoiseChisel optimization for detection, General program usage tutorial
@subsection NoiseChisel optimization for storage
-As we showed before (in @ref{Multiextension FITS files NoiseChisel's
-output}), NoiseChisel's output is a multi-extension FITS file with several
-images the same size as the input. As the input datasets get larger this
-output can become hard to manage and waste a lot of storage
-space. Fortunately there is a solution to this problem (which is also
-useful for Segment's outputs). But first, let's have a look at the volume
-of NoiseChisel's output from @ref{NoiseChisel optimization for detection}
-(fast answer, its larger than 100 mega-bytes):
+As we showed before (in @ref{Multiextension FITS files NoiseChisel's output}),
NoiseChisel's output is a multi-extension FITS file with several images the
same size as the input.
+As the input datasets get larger, this output can become hard to manage and
waste a lot of storage space.
+Fortunately there is a solution to this problem (which is also useful for
Segment's outputs).
+But first, let's have a look at the volume of NoiseChisel's output from
@ref{NoiseChisel optimization for detection} (fast answer: it's larger than 100
megabytes):
@example
$ ls -lh nc/xdf-f160w.fits
@end example
-Two options can drastically decrease NoiseChisel's output file size: 1)
-With the @option{--rawoutput} option, NoiseChisel won't create a
-Sky-subtracted input. After all, it is redundant: you can always generate
-it by subtracting the Sky from the input image (which you have in your
-database) using the Arithmetic program. 2) With the
-@option{--oneelempertile}, you can tell NoiseChisel to store its Sky and
-Sky standard deviation results with one pixel per tile (instead of many
-pixels per tile).
+Two options can drastically decrease NoiseChisel's output file size: 1) With
the @option{--rawoutput} option, NoiseChisel won't create a Sky-subtracted
input.
+After all, it is redundant: you can always generate it by subtracting the Sky
from the input image (which you have in your database) using the Arithmetic
program.
+2) With @option{--oneelempertile}, you can tell NoiseChisel to store its
Sky and Sky standard deviation results with one pixel per tile (instead of many
pixels per tile).
@example
$ astnoisechisel flat-ir/xdf-f160w.fits --oneelempertile --rawoutput
@end example
@noindent
-The output is now just under 8 mega byes! But you can even be more
-efficient in space by compressing it. Try the command below to see how
-NoiseChisel's output has now shrunk to about 250 kilobyes while keeping all
-the necessary information as the original 100 mega-byte output.
+The output is now just under 8 megabytes! But you can be even more
space-efficient by compressing it.
+Try the command below to see how NoiseChisel's output has now shrunk to about
250 kilobytes while keeping all the necessary information of the original 100
megabyte output.
@example
$ gzip --best xdf-f160w_detected.fits
$ ls -lh xdf-f160w_detected.fits.gz
@end example
-We can get this wonderful level of compression because NoiseChisel's output
-is binary with only two values: 0 and 1. Compression algorithms are highly
-optimized in such scenarios.
+We can get this wonderful level of compression because NoiseChisel's output is
binary with only two values: 0 and 1.
+Compression algorithms are highly optimized in such scenarios.
-You can open @file{xdf-f160w_detected.fits.gz} directly in SAO DS9 or feed
-it to any of Gnuastro's programs without having to uncompress
-it. Higher-level programs that take NoiseChisel's output can also deal with
-this compressed image where the Sky and its Standard deviation are one
-pixel-per-tile.
+You can open @file{xdf-f160w_detected.fits.gz} directly in SAO DS9 or feed it
to any of Gnuastro's programs without having to uncompress it.
+Higher-level programs that take NoiseChisel's output can also deal with this
compressed image where the Sky and its Standard deviation are one
pixel-per-tile.
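+For example (assuming your Gnuastro build uses CFITSIO's transparent support
for Gzip-compressed FITS, which is the normal configuration), you can list the
compressed file's extensions directly, without uncompressing it first:
@example
$ astfits xdf-f160w_detected.fits.gz
@end example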
@node Segmentation and making a catalog, Working with catalogs estimating
colors, NoiseChisel optimization for storage, General program usage tutorial
@subsection Segmentation and making a catalog
-The main output of NoiseChisel is the binary detection map
-(@code{DETECTIONS} extension, see @ref{NoiseChisel optimization for
-detection}). which only has two values of 1 or 0. This is useful when
-studying the noise, but hardly of any use when you actually want to study
-the targets/galaxies in the image, especially in such a deep field where
-the detection map of almost everything is connected. To find the galaxies
-over the detections, we'll use Gnuastro's @ref{Segment} program:
+The main output of NoiseChisel is the binary detection map (@code{DETECTIONS}
extension, see @ref{NoiseChisel optimization for detection}), which only has
two values: 1 or 0.
+This is useful when studying the noise, but hardly of any use when you
actually want to study the targets/galaxies in the image, especially in such a
deep field where the detection map of almost everything is connected.
+To find the galaxies over the detections, we'll use Gnuastro's @ref{Segment}
program:
@example
$ mkdir seg
$ astsegment nc/xdf-f160w.fits -oseg/xdf-f160w.fits
@end example
-Segment's operation is very much like NoiseChisel (in fact, prior to
-version 0.6, it was part of NoiseChisel). For example the output is a
-multi-extension FITS file, it has check images and uses the undetected
-regions as a reference. Please have a look at Segment's multi-extension
-output with @command{ds9} to get a good feeling of what it has done.
+Segment's operation is very much like NoiseChisel's (in fact, prior to version
0.6, it was part of NoiseChisel).
+For example, the output is a multi-extension FITS file; it has check images and
uses the undetected regions as a reference.
+Please have a look at Segment's multi-extension output with @command{ds9} to
get a good feeling of what it has done.
@example
$ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit
@end example
-Like NoiseChisel, the first extension is the input. The @code{CLUMPS}
-extension shows the true ``clumps'' with values that are @mymath{\ge1}, and
-the diffuse regions labeled as @mymath{-1}. In the @code{OBJECTS}
-extension, we see that the large detections of NoiseChisel (that may have
-contained many galaxies) are now broken up into separate labels. see
-@ref{Segment} for more.
+Like NoiseChisel, the first extension is the input.
+The @code{CLUMPS} extension shows the true ``clumps'' with values that are
@mymath{\ge1}, and the diffuse regions labeled as @mymath{-1}.
+In the @code{OBJECTS} extension, we see that the large detections of
NoiseChisel (that may have contained many galaxies) are now broken up into
separate labels.
+See @ref{Segment} for more.
-Having localized the regions of interest in the dataset, we are ready to do
-measurements on them with @ref{MakeCatalog}. Besides the IDs, we want to
-measure (in this order) the Right Ascension (with @option{--ra}),
-Declination (@option{--dec}), magnitude (@option{--magnitude}), and
-signal-to-noise ratio (@option{--sn}) of the objects and clumps. The
-following command will make these measurements on Segment's F160W output:
+Having localized the regions of interest in the dataset, we are ready to do
measurements on them with @ref{MakeCatalog}.
+Besides the IDs, we want to measure (in this order) the Right Ascension (with
@option{--ra}), Declination (@option{--dec}), magnitude (@option{--magnitude}),
and signal-to-noise ratio (@option{--sn}) of the objects and clumps.
+The following command will make these measurements on Segment's F160W output:
@c Keep the `--zeropoint' on a single line, because later, we'll add
@c `--valuesfile' in that line also, and it would be more clear if both
@@ -3781,30 +2937,18 @@ $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec
--magnitude --sn \
@end example
@noindent
-From the printed statements on the command-line, you see that MakeCatalog
-read all the extensions in Segment's output for the various measurements it
-needed.
+From the printed statements on the command-line, you see that MakeCatalog read
all the extensions in Segment's output for the various measurements it needed.
-To calculate colors, we also need magnitude measurements on the F105W
-filter. However, the galaxy properties might differ between the filters
-(which is the whole purpose behind measuring colors). Also, the noise
-properties and depth of the datasets differ. Therefore, if we simply follow
-the same Segment and MakeCatalog calls above for the F105W filter, we are
-going to get a different number of objects and clumps. Matching the two
-catalogs is possible (for example with @ref{Match}), but the fact that the
-measurements will be done on different pixels, can bias the result. Since
-the Point spread function (PSF) of both images is very similar, an accurate
-color calculation can only be done when magnitudes are measured from the
-same pixels on both images.
+To calculate colors, we also need magnitude measurements on the F105W filter.
+However, the galaxy properties might differ between the filters (which is the
whole purpose behind measuring colors).
+Also, the noise properties and depth of the datasets differ.
+Therefore, if we simply follow the same Segment and MakeCatalog calls above
for the F105W filter, we are going to get a different number of objects and
clumps.
+Matching the two catalogs is possible (for example with @ref{Match}), but the
fact that the measurements will be done on different pixels can bias the
result.
+Since the Point spread function (PSF) of both images is very similar, an
accurate color calculation can only be done when magnitudes are measured from
the same pixels on both images.
-The F160W image is deeper, thus providing better detection/segmentation,
-and redder, thus observing smaller/older stars and representing more of the
-mass in the galaxies. To generate the F105W catalog, we will thus use the
-pixel labels generated on the F160W filter, but do the measurements on the
-F105W filter (using MakeCatalog's @option{--valuesfile} option). Notice how
-the only difference between this call to MakeCatalog and the previous one
-is @option{--valuesfile}, the value given to @code{--zeropoint} and the
-output name.
+The F160W image is deeper, thus providing better detection/segmentation, and
redder, thus observing smaller/older stars and representing more of the mass in
the galaxies.
+To generate the F105W catalog, we will thus use the pixel labels generated on
the F160W filter, but do the measurements on the F105W filter (using
MakeCatalog's @option{--valuesfile} option).
+Notice how the only difference between this call to MakeCatalog and the
previous one is @option{--valuesfile}, the value given to @code{--zeropoint}
and the output name.
@example
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
@@ -3812,47 +2956,35 @@ $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec
--magnitude --sn \
--clumpscat --output=cat/xdf-f105w.fits
@end example
-Look into what MakeCatalog printed on the command-line. You can see that
-(as requested) the object and clump labels were taken from the respective
-extensions in @file{seg/xdf-f160w.fits}, while the values and Sky standard
-deviation were done on @file{nc/xdf-f105w.fits}.
+Look into what MakeCatalog printed on the command-line.
+You can see that (as requested) the object and clump labels were taken from
the respective extensions in @file{seg/xdf-f160w.fits}, while the values and
Sky standard deviation were taken from @file{nc/xdf-f105w.fits}.
-Since we used the same labeled image on both filters, the number of rows in
-both catalogs are the same. The clumps are not affected by the
-hard-to-deblend and low signal-to-noise diffuse regions, they are more
-robust for calculating the colors (compared to objects). Therefore from
-this step onward, we'll continue with clumps.
+Since we used the same labeled image on both filters, the number of rows in
both catalogs is the same.
+The clumps are not affected by the hard-to-deblend and low signal-to-noise
diffuse regions, so they are more robust for calculating colors (compared to
objects).
+Therefore from this step onward, we'll continue with clumps.
-Finally, the comments in MakeCatalog's output (@code{COMMENT} keywords in
-the FITS headers, or lines starting with @code{#} in plain text) contain
-some important information about the input datasets and other useful info
-(for example pixel area or per-pixel surface brightness limit). You can see
-them with this command:
+Finally, the comments in MakeCatalog's output (@code{COMMENT} keywords in the
FITS headers, or lines starting with @code{#} in plain text) contain some
important information about the input datasets and other useful details (for
example the pixel area or per-pixel surface brightness limit).
+You can see them with this command:
@example
$ astfits cat/xdf-f160w.fits -h1 | grep COMMENT
@end example
-@node Working with catalogs estimating colors, Aperture photomery,
Segmentation and making a catalog, General program usage tutorial
+@node Working with catalogs estimating colors, Aperture photometry,
Segmentation and making a catalog, General program usage tutorial
@subsection Working with catalogs (estimating colors)
-The output of the MakeCatalog command above is a FITS table (see
-@ref{Segmentation and making a catalog}). The two clump and object catalogs
-are available in the two extensions of the single FITS
-file@footnote{MakeCatalog can also output plain text tables. However, in
-the plain text format you can only have one table per file. Therefore, if
-you also request measurements on clumps, two plain text tables will be
-created (suffixed with @file{_o.txt} and @file{_c.txt}).}. Let's see the
-extensions and their basic properties with the Fits program:
+The output of the MakeCatalog command above is a FITS table (see
@ref{Segmentation and making a catalog}).
+The clump and object catalogs are available in the two extensions of the
single FITS file@footnote{MakeCatalog can also output plain text tables.
+However, in the plain text format you can only have one table per file.
+Therefore, if you also request measurements on clumps, two plain text tables
will be created (suffixed with @file{_o.txt} and @file{_c.txt}).}.
+Let's see the extensions and their basic properties with the Fits program:
@example
$ astfits cat/xdf-f160w.fits # Extension information
@end example
-Now, let's inspect the table in each extension with Gnuastro's Table
-program (see @ref{Table}). Note that we could have used @option{-hOBJECTS}
-and @option{-hCLUMPS} instead of @option{-h1} and @option{-h2}
-respectively.
+Now, let's inspect the table in each extension with Gnuastro's Table program
(see @ref{Table}).
+Note that we could have used @option{-hOBJECTS} and @option{-hCLUMPS} instead
of @option{-h1} and @option{-h2} respectively.
@example
$ asttable cat/xdf-f160w.fits -h1 --info # Objects catalog info.
@@ -3861,16 +2993,11 @@ $ asttable cat/xdf-f160w.fits -h2 -i # Clumps
catalog info.
$ asttable cat/xdf-f160w.fits -h2 # Clumps catalog columns.
@end example
-As you see above, when given a specific table (file name and extension),
-Table will print the full contents of all the columns. To see the basic
-metadata about each column (for example name, units and comments), simply
-append a @option{--info} (or @option{-i}) to the command.
+As you see above, when given a specific table (file name and extension), Table
will print the full contents of all the columns.
+To see the basic metadata about each column (for example name, units and
comments), simply append a @option{--info} (or @option{-i}) to the command.
-To print the contents of special column(s), just specify the column
-number(s) (counting from @code{1}) or the column name(s) (if they have
-one). For example, if you just want the magnitude and signal-to-noise ratio
-of the clumps (in @option{-h2}), you can get it with any of the following
-commands
+To print the contents of specific column(s), just specify the column number(s)
(counting from @code{1}) or the column name(s) (if they have one).
+For example, if you just want the magnitude and signal-to-noise ratio of the
clumps (in @option{-h2}), you can get it with any of the following commands
@example
$ asttable cat/xdf-f160w.fits -h2 -c5,6
@@ -3879,41 +3006,25 @@ $ asttable cat/xdf-f160w.fits -h2 -c5 -c6
$ asttable cat/xdf-f160w.fits -h2 -cMAGNITUDE -cSN
@end example
-Using column names instead of numbers has many advantages: 1) you don't
-have to worry about the order of columns in the table. 2) It acts as a
-documentation in the script. Column meta-data (including a name) aren't
-just limited to FITS tables and can also be used in plain text tables, see
-@ref{Gnuastro text table format}.
-
-We can finally calculate the colors of the objects from these two
-datasets. If you inspect the contents of the two catalogs, you'll notice
-that because they were both derived from the same segmentation maps, the
-rows are ordered identically (they correspond to the same object/clump in
-both filters). But to be generic (usable even when the rows aren't ordered
-similarly) and display another useful program in Gnuastro, we'll use
-@ref{Match}.
-
-As the name suggests, Gnuastro's Match program will match rows based on
-distance (or aperture in 2D) in one (or two) columns. In the command below,
-the options relating to each catalog are placed under it for easy
-understanding. You give Match two catalogs (from the two different filters
-we derived above) as argument, and the HDUs containing them (if they are
-FITS files) with the @option{--hdu} and @option{--hdu2} options. The
-@option{--ccol1} and @option{--ccol2} options specify the
-coordinate-columns which should be matched with which in the two
-catalogs. With @option{--aperture} you specify the acceptable error (radius
-in 2D), in the same units as the columns (see below for why we have
-requested an aperture of 0.35 arcseconds, or less than 6 HST pixels).
-
-The @option{--outcols} of Match is a very convenient feature in Match: you
-can use it to specify which columns from the two catalogs you want in the
-output (merge two input catalogs into one). If the first character is an
-`@key{a}', the respective matched column (number or name, similar to Table
-above) in the first catalog will be written in the output table. When the
-first character is a `@key{b}', the respective column from the second
-catalog will be written in the output. Also, if the first character is
-followed by @code{_all}, then all the columns from the respective catalog
-will be put in the output.
+Using column names instead of numbers has many advantages:
+1) You don't have to worry about the order of columns in the table.
+2) It acts as documentation in the script.
+Column metadata (including a name) aren't just limited to FITS tables; they
can also be used in plain text tables, see @ref{Gnuastro text table format}.
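+As an illustrative sketch (the precise keyword syntax is defined in
@ref{Gnuastro text table format}; the column names and units here are
hypothetical), plain-text column metadata look something like this:
@example
# Column 1: ID        [counter, i32] Object identifier
# Column 2: MAGNITUDE [log,     f32] Magnitude of the object
@end example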
+
+We can finally calculate the colors of the objects from these two datasets.
+If you inspect the contents of the two catalogs, you'll notice that because
they were both derived from the same segmentation maps, the rows are ordered
identically (they correspond to the same object/clump in both filters).
+But to be generic (usable even when the rows aren't ordered similarly) and
display another useful program in Gnuastro, we'll use @ref{Match}.
+
+As the name suggests, Gnuastro's Match program will match rows based on
distance (or aperture in 2D) in one (or two) columns.
+In the command below, the options relating to each catalog are placed under it
for easy understanding.
+You give Match two catalogs (from the two different filters we derived above)
as arguments, and the HDUs containing them (if they are FITS files) with the
@option{--hdu} and @option{--hdu2} options.
+The @option{--ccol1} and @option{--ccol2} options specify the
coordinate-columns which should be matched with which in the two catalogs.
+With @option{--aperture} you specify the acceptable error (radius in 2D), in
the same units as the columns (see below for why we have requested an aperture
of 0.35 arcseconds, or less than 6 HST pixels).
+
+The @option{--outcols} option is a very convenient feature of Match: you can
use it to specify which columns from the two catalogs you want in the output
(merging the two input catalogs into one).
+If the first character is an `@key{a}', the respective matched column (number
or name, similar to Table above) in the first catalog will be written in the
output table.
+When the first character is a `@key{b}', the respective column from the second
catalog will be written in the output.
+Also, if the first character is followed by @code{_all}, then all the columns
from the respective catalog will be put in the output.
@example
$ astmatch cat/xdf-f160w.fits cat/xdf-f105w.fits \
@@ -3930,15 +3041,12 @@ Let's have a look at the columns in the matched catalog:
$ asttable cat/xdf-f160w-f105w.fits -i
@end example
-Indeed, its exactly the columns we wanted: there are two @code{MAGNITUDE}
-and @code{SN} columns. The first is from the F160W filter, the second is
-from the F105W. Right now, you know this. But in one hour, you'll start
-doubting your self: going through your command history, trying to answer
-this question: ``which magnitude corresponds to which filter?''. You should
-never torture your future-self (or colleagues) like this! So, let's rename
-these confusing columns in the matched catalog. The FITS standard for
-tables stores the column names in the @code{TTYPE} header keywords, so
-let's have a look:
+Indeed, it's exactly the columns we wanted: there are two @code{MAGNITUDE} and
@code{SN} columns.
+The first is from the F160W filter, the second is from the F105W.
+Right now, you know this.
+But in one hour, you'll start doubting yourself: going through your command
history, trying to answer this question: ``which magnitude corresponds to which
filter?''.
+You should never torture your future self (or colleagues) like this! So, let's
rename these confusing columns in the matched catalog.
+The FITS standard for tables stores the column names in the @code{TTYPE}
header keywords, so let's have a look:
@example
$ astfits cat/xdf-f160w-f105w.fits -h1 | grep TTYPE
@@ -3955,86 +3063,48 @@ $ astfits cat/xdf-f160w-f105w.fits -h1
\
$ asttable cat/xdf-f160w-f105w.fits -i
@end example
-If you noticed, when running Match, we also asked for a log file
-(@option{--log}). Many Gnuastro programs have this option to provide some
-detailed information on their operation in case you are curious or want to
-debug something. Here, we are using it to justify the value we gave to
-@option{--aperture}. Even though you asked for the output to be written in
-the @file{cat} directory, a listing of the contents of your current
-directory will show you an extra @file{astmatch.fits} file. Let's have a
-look at what columns it contains.
+As you may have noticed, when running Match, we also asked for a log file
(@option{--log}).
+Many Gnuastro programs have this option to provide some detailed information
on their operation in case you are curious or want to debug something.
+Here, we are using it to justify the value we gave to @option{--aperture}.
+Even though you asked for the output to be written in the @file{cat}
directory, a listing of the contents of your current directory will show you an
extra @file{astmatch.fits} file.
+Let's have a look at what columns it contains.
@example
$ ls
-$ asttable astmatch.log -i
+$ asttable astmatch.fits -i
@end example
-@c********************************
-@c We'll merge them into one table using the @command{paste} program
-@c on the command-line. But, we only want the magnitude from the F105W
-@c dataset, so we'll only pull out the @code{MAGNITUDE} and @code{SN}
-@c column. The output of @command{paste} will have each line of both catalogs
-@c merged into a single line.
-
-@c @example
-@c $ asttable cat/xdf-f160w.fits -h2 > xdf-f160w.txt
-@c $ asttable cat/xdf-f105w.fits -h2 -cMAGNITUDE,SN > xdf-f105w.txt
-@c $ paste xdf-f160w.txt xdf-f105w.txt > xdf-f160w-f105w.txt
-@c @end example
-
-@c Open @file{xdf-f160w-f105w.txt} to see how @command{paste} has operated.
-@c ********************************
-
@cindex Flux-weighted
@cindex SED, Spectral Energy Distribution
@cindex Spectral Energy Distribution, SED
-The @file{MATCH_DIST} column contains the distance of the matched rows,
-let's have a look at the distribution of values in this column. You might
-be asking yourself ``why should the positions of the two filters differ
-when I gave MakeCatalog the same segmentation map?'' The reason is that the
-central positions are @emph{flux-weighted}. Therefore the
-@option{--valuesfile} dataset you give to MakeCatalog will also affect the
-center measurements@footnote{To only measure the center based on the
-labeled pixels (and ignore the pixel values), you can ask for the columns
-that contain @option{geo} (for geometric) in them. For example
-@option{--geow1} or @option{--geow2} for the RA and Declination (first and
-second world-coordinates).}. Recall that the Spectral Energy Distribution
-(SED) of galaxies is not flat and they have substructure, therefore, they
-can have different shapes/morphologies in different filters.
-
-Gnuastro has a simple program for basic statistical analysis. The command
-below will print some basic information about the distribution (minimum,
-maximum, median and etc), along with a cute little ASCII histogram to
-visually help you understand the distribution on the command-line without
-the need for a graphic user interface. This ASCII histogram can be useful
-when you just want some coarse and general information on the input
-dataset. It is also useful when working on a server (where you may not have
-graphic user interface), and finally, its fast.
+The @file{MATCH_DIST} column contains the distance of the matched rows; let's
have a look at the distribution of values in this column.
+You might be asking yourself ``why should the positions of the two filters
differ when I gave MakeCatalog the same segmentation map?'' The reason is that
the central positions are @emph{flux-weighted}.
+Therefore the @option{--valuesfile} dataset you give to MakeCatalog will also
affect the center measurements@footnote{To only measure the center based on the
labeled pixels (and ignore the pixel values), you can ask for the columns that
contain @option{geo} (for geometric) in them.
+For example @option{--geow1} or @option{--geow2} for the RA and Declination
(first and second world-coordinates).}.
+Recall that the Spectral Energy Distribution (SED) of galaxies is not flat and
they have substructure; therefore, they can have different shapes/morphologies
in different filters.
+
+Gnuastro has a simple program for basic statistical analysis.
+The command below will print some basic information about the distribution
(minimum, maximum, median, etc), along with a cute little ASCII histogram to
visually help you understand the distribution on the command-line without the
need for a graphic user interface.
+This ASCII histogram can be useful when you just want some coarse and general
information on the input dataset.
+It is also useful when working on a server (where you may not have a graphic
user interface), and finally, it's fast.
@example
$ aststatistics astmatch.fits -cMATCH_DIST
$ rm astmatch.fits
@end example
-The units of this column are the same as the columns you gave to Match: in
-degrees. You see that while almost all the objects matched very nicely, the
-maximum distance is roughly 0.31 arcseconds. This is why we asked for an
-aperture of 0.35 arcseconds when doing the match.
+The units of this column are the same as the columns you gave to Match: in
degrees.
+You see that while almost all the objects matched very nicely, the maximum
distance is roughly 0.31 arcseconds.
+This is why we asked for an aperture of 0.35 arcseconds when doing the match.
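Since the matched columns were in degrees, converting a distance to arcseconds is just a multiplication by 3600. As a quick sanity check with a hypothetical distance (the value below is made up, roughly at the scale of the maximum noted above):

```shell
# Convert a matching distance from degrees to arcseconds; the value
# 8.61e-05 degrees is a hypothetical example, not taken from the log.
awk 'BEGIN { printf "%.2f\n", 8.61e-05 * 3600 }'
# prints 0.31
```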
-Gnuastro's Table program can also be used to measure the colors using the
-command below. As before, the @option{-c1,2} option will tell Table to
-print the first two columns. With the @option{--range=SN_F160W,7,inf} we
-only keep the rows that have a F160W signal-to-noise ratio larger than
-7@footnote{The value of 7 is taken from the clump S/N threshold in F160W
-(where the clumps were defined).}.
+Gnuastro's Table program can also be used to measure the colors using the
command below.
+As before, the @option{-c1,2} option will tell Table to print the first two
columns.
+With the @option{--range=SN_F160W,7,inf} we only keep the rows that have a
F160W signal-to-noise ratio larger than 7@footnote{The value of 7 is taken from
the clump S/N threshold in F160W (where the clumps were defined).}.
-Finally, for estimating the colors, we use Table's column arithmetic
-feature. It uses the same notation as the Arithmetic program (see
-@ref{Reverse polish notation}), with almost all the same operators (see
-@ref{Arithmetic operators}). You can use column arithmetic in any output
-column, just put the value in double quotations and start the value with
-@code{arith} (followed by a space) like below. In column-arithmetic, you
-can identify columns by number or name, see @ref{Column arithmetic}.
+Finally, for estimating the colors, we use Table's column arithmetic feature.
+It uses the same notation as the Arithmetic program (see @ref{Reverse polish
notation}), with almost all the same operators (see @ref{Arithmetic operators}).
+You can use column arithmetic in any output column: just put the value in
double quotes and start the value with @code{arith} (followed by a space)
like below.
+In column-arithmetic, you can identify columns by number or name, see
@ref{Column arithmetic}.
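In reverse polish notation the operator comes after its operands, so a color is written as @code{MAGNITUDE_F105W MAGNITUDE_F160W -}. The subtraction itself is ordinary; as a sketch with hypothetical magnitudes (the values below are made up, not from the catalog):

```shell
# Hypothetical magnitudes, only to illustrate the color computation
# (F105W magnitude minus F160W magnitude):
awk 'BEGIN { f105w=25.6; f160w=24.1; print f105w - f160w }'
# prints 1.5
```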
@example
$ asttable cat/xdf-f160w-f105w.fits -ocat/f105w-f160w.fits \
@@ -4043,20 +3113,16 @@ $ asttable cat/xdf-f160w-f105w.fits
-ocat/f105w-f160w.fits \
@end example
@noindent
-You can inspect the distribution of colors with the Statistics program. But
-first, let's give the color column a proper name.
+You can inspect the distribution of colors with the Statistics program.
+But first, let's give the color column a proper name.
@example
$ astfits cat/f105w-f160w.fits --update=TTYPE5,COLOR_F105W_F160W
$ aststatistics cat/f105w-f160w.fits -cCOLOR_F105W_F160W
@end example
-You can later use Gnuastro's Statistics program with the
-@option{--histogram} option to build a much more fine-grained histogram as
-a table to feed into your favorite plotting program for a much more
-accurate/appealing plot (for example with PGFPlots in @LaTeX{}). If you
-just want a specific measure, for example the mean, median and standard
-deviation, you can ask for them specifically with this command:
+You can later use Gnuastro's Statistics program with the @option{--histogram}
option to build a much more fine-grained histogram as a table to feed into your
favorite plotting program for a much more accurate/appealing plot (for example
with PGFPlots in @LaTeX{}).
+If you just want a specific measure, for example the mean, median and standard
deviation, you can ask for them specifically with this command:
@example
$ aststatistics cat/f105w-f160w.fits -cCOLOR_F105W_F160W \
@@ -4064,22 +3130,17 @@ $ aststatistics cat/f105w-f160w.fits
-cCOLOR_F105W_F160W \
@end example
-@node Aperture photomery, Finding reddest clumps and visual inspection,
Working with catalogs estimating colors, General program usage tutorial
-@subsection Aperture photomery
-Some researchers prefer to have colors in a fixed aperture for all the
-objects. The colors we calculated in @ref{Working with catalogs estimating
-colors} used a different segmentation map for each object. This might not
-satisfy some science cases. To make a catalog from fixed apertures, we
-should make a labeled image which has a fixed label for each aperture. That
-labeled image can be given to MakeCatalog instead of Segment's labeled
-detection image.
+@node Aperture photometry, Finding reddest clumps and visual inspection,
Working with catalogs estimating colors, General program usage tutorial
+@subsection Aperture photometry
+Some researchers prefer to have colors in a fixed aperture for all the objects.
+The colors we calculated in @ref{Working with catalogs estimating colors} used
a different segmentation map for each object.
+This might not satisfy some science cases.
+To make a catalog from fixed apertures, we should make a labeled image which
has a fixed label for each aperture.
+That labeled image can be given to MakeCatalog instead of Segment's labeled
detection image.
@cindex GNU AWK
-To generate the apertures catalog we'll use Gnuastro's MakeProfiles (see
-@ref{MakeProfiles}). We'll first read the clump positions from the F160W
-catalog, then use AWK to set the other parameters of each profile to be a
-fixed circle of radius 5 pixels (recall that we want all apertures to be
-identical in this scenario).
+To generate the apertures catalog we'll use Gnuastro's MakeProfiles (see
@ref{MakeProfiles}).
+We'll first read the clump positions from the F160W catalog, then use AWK to
set the other parameters of each profile to be a fixed circle of radius 5
pixels (recall that we want all apertures to be identical in this scenario).
@example
$ rm *.fits *.txt
@@ -4088,15 +3149,10 @@ $ asttable cat/xdf-f160w.fits -hCLUMPS -cRA,DEC
\
> apertures.txt
@end example
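The AWK step is conceptually simple: print each position with a row counter and the fixed aperture parameters appended. A toy sketch with hypothetical coordinates (the exact column order MakeProfiles expects is described in @ref{Invoking astmkprof}):

```shell
# Toy sketch: append a counter and a fixed radius of 5 pixels to each
# "RA Dec" row; the coordinates here are hypothetical.
printf '53.1600 -27.7800\n53.1700 -27.7900\n' \
    | awk '{ print NR, $1, $2, 5 }'
# prints:
# 1 53.1600 -27.7800 5
# 2 53.1700 -27.7900 5
```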
-We can now feed this catalog into MakeProfiles using the command below to
-build the apertures over the image. The most important option for this
-particular job is @option{--mforflatpix}, it tells MakeProfiles that the
-values in the magnitude column should be used for each pixel of a flat
-profile. Without it, MakeProfiles would build the profiles such that the
-@emph{sum} of the pixels of each profile would have a @emph{magnitude} (in
-log-scale) of the value given in that column (what you would expect when
-simulating a galaxy for example). See @ref{Invoking astmkprof} for details
-on the options.
+We can now feed this catalog into MakeProfiles using the command below to
build the apertures over the image.
+The most important option for this particular job is @option{--mforflatpix},
it tells MakeProfiles that the values in the magnitude column should be used
for each pixel of a flat profile.
+Without it, MakeProfiles would build the profiles such that the @emph{sum} of
the pixels of each profile would have a @emph{magnitude} (in log-scale) of the
value given in that column (what you would expect when simulating a galaxy for
example).
+See @ref{Invoking astmkprof} for details on the options.
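The magnitude relation mentioned above (the default behavior, without @option{--mforflatpix}) is @code{m = zeropoint - 2.5*log10(sum)}. A quick numeric check, assuming a hypothetical flat profile whose pixels sum to 100 counts, with the 26.27 zeropoint used for F105W later in this tutorial:

```shell
# m = zeropoint - 2.5*log10(sum of pixel values); the 100-count sum is
# a hypothetical example.
awk 'BEGIN { zp=26.27; sum=100; printf "%.2f\n", zp - 2.5*log(sum)/log(10) }'
# prints 21.27
```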
@example
$ astmkprof apertures.txt --background=flat-ir/xdf-f160w.fits \
@@ -4104,25 +3160,17 @@ $ astmkprof apertures.txt
--background=flat-ir/xdf-f160w.fits \
--mode=wcs
@end example
-The first thing you might notice in the printed information is that the
-profiles are not built in order. This is because MakeProfiles works in
-parallel, and parallel CPU operations are asynchronous. You can try running
-MakeProfiles with one thread (using @option{--numthreads=1}) to see how
-order is respected in that case.
+The first thing you might notice in the printed information is that the
profiles are not built in order.
+This is because MakeProfiles works in parallel, and parallel CPU operations
are asynchronous.
+You can try running MakeProfiles with one thread (using
@option{--numthreads=1}) to see how order is respected in that case.
-Open the output @file{apertures.fits} file and see the result. Where the
-apertures overlap, you will notice that one label has replaced the other
-(because of the @option{--replace} option). In the future, MakeCatalog will
-be able to work with overlapping labels, but currently it doesn't. If you
-are interested, please join us in completing Gnuastro with added
-improvements like this (see task 14750
-@footnote{@url{https://savannah.gnu.org/task/index.php?14750}}).
+Open the output @file{apertures.fits} file and see the result.
+Where the apertures overlap, you will notice that one label has replaced the
other (because of the @option{--replace} option).
+In the future, MakeCatalog will be able to work with overlapping labels, but
currently it doesn't.
+If you are interested, please join us in completing Gnuastro with added
improvements like this (see task 14750
@footnote{@url{https://savannah.gnu.org/task/index.php?14750}}).
-We can now feed the @file{apertures.fits} labeled image into MakeCatalog
-instead of Segment's output as shown below. In comparison with the previous
-MakeCatalog call, you will notice that there is no more
-@option{--clumpscat} option, since each aperture is treated as a separate
-``object'' here.
+We can now feed the @file{apertures.fits} labeled image into MakeCatalog
instead of Segment's output as shown below.
+In comparison with the previous MakeCatalog call, you will notice that there
is no more @option{--clumpscat} option, since each aperture is treated as a
separate ``object'' here.
@example
$ astmkcatalog apertures.fits -h1 --zeropoint=26.27 \
@@ -4131,36 +3179,28 @@ $ astmkcatalog apertures.fits -h1 --zeropoint=26.27
\
--output=cat/xdf-f105w-aper.fits
@end example
-This catalog has the same number of rows as the catalog produced from
-clumps in @ref{Working with catalogs estimating colors}. Therefore similar
-to how we found colors, you can compare the aperture and clump magnitudes
-for example.
+This catalog has the same number of rows as the catalog produced from clumps
in @ref{Working with catalogs estimating colors}.
+Therefore similar to how we found colors, you can compare the aperture and
clump magnitudes for example.
You can also change the filter name and zeropoint magnitudes and run this
command again to have the fixed aperture magnitude in the F160W filter and
measure colors on apertures.
-@node Finding reddest clumps and visual inspection, Citing and acknowledging
Gnuastro, Aperture photomery, General program usage tutorial
+@node Finding reddest clumps and visual inspection, Citing and acknowledging
Gnuastro, Aperture photometry, General program usage tutorial
@subsection Finding reddest clumps and visual inspection
@cindex GNU AWK
-As a final step, let's go back to the original clumps-based color
-measurement we generated in @ref{Working with catalogs estimating
-colors}. We'll find the objects with the strongest color and make a cutout
-to inspect them visually and finally, we'll see how they are located on the
-image. With the command below, we'll select the reddest objects (those with
-a color larger than 1.5):
+As a final step, let's go back to the original clumps-based color measurement
we generated in @ref{Working with catalogs estimating colors}.
+We'll find the objects with the strongest color and make a cutout to inspect
them visually; finally, we'll see how they are located on the image.
+With the command below, we'll select the reddest objects (those with a color
larger than 1.5):
@example
$ asttable cat/f105w-f160w.fits --range=COLOR_F105W_F160W,1.5,inf
@end example
-We want to crop the F160W image around each of these objects, but we need a
-unique identifier for them first. We'll define this identifier using the
-object and clump labels (with an underscore between them) and feed the
-output of the command above to AWK to generate a catalog. Note that since
-we are making a plain text table, we'll define the column metadata manually
-(see @ref{Gnuastro text table format}).
+We want to crop the F160W image around each of these objects, but we need a
unique identifier for them first.
+We'll define this identifier using the object and clump labels (with an
underscore between them) and feed the output of the command above to AWK to
generate a catalog.
+Note that since we are making a plain text table, we'll define the column
metadata manually (see @ref{Gnuastro text table format}).
@example
$ echo "# Column 1: ID [name, str10] Object ID" > reddest.txt
@@ -4169,12 +3209,10 @@ $ asttable cat/f105w-f160w.fits
--range=COLOR_F105W_F160W,1.5,inf \
>> reddest.txt
@end example
-We can now feed @file{reddest.txt} into Gnuastro's Crop program to see what
-these objects look like. To keep things clean, we'll make a directory
-called @file{crop-red} and ask Crop to save the crops in this
-directory. We'll also add a @file{-f160w.fits} suffix to the crops (to
-remind us which image they came from). The width of the crops will be 15
-arcseconds.
+We can now feed @file{reddest.txt} into Gnuastro's Crop program to see what
these objects look like.
+To keep things clean, we'll make a directory called @file{crop-red} and ask
Crop to save the crops in this directory.
+We'll also add a @file{-f160w.fits} suffix to the crops (to remind us which
image they came from).
+The width of the crops will be 15 arcseconds.
@example
$ mkdir crop-red
@@ -4183,16 +3221,12 @@ $ astcrop flat-ir/xdf-f160w.fits --mode=wcs
--namecol=ID \
--suffix=-f160w.fits --output=crop-red
@end example
-You can see all the cropped FITS files in the @file{crop-red}
-directory. Like the MakeProfiles command in @ref{Aperture photomery}, you
-might notice that the crops aren't made in order. This is because each crop
-is independent of the rest, therefore crops are done in parallel, and
-parallel operations are asynchronous. In the command above, you can change
-@file{f160w} to @file{f105w} to make the crops in both filters.
+You can see all the cropped FITS files in the @file{crop-red} directory.
+Like the MakeProfiles command in @ref{Aperture photometry}, you might notice
that the crops aren't made in order.
+This is because each crop is independent of the rest; therefore, crops are done
in parallel, and parallel operations are asynchronous.
+In the command above, you can change @file{f160w} to @file{f105w} to make the
crops in both filters.
-To view the crops more easily (not having to open ds9 for each image), you
-can convert the FITS crops into the JPEG format with a shell loop like
-below.
+To view the crops more easily (not having to open ds9 for each image), you can
convert the FITS crops into the JPEG format with a shell loop like below.
@example
$ cd crop-red
@@ -4202,18 +3236,14 @@ $ for f in *.fits; do
\
$ cd ..
@end example
-You can now use your general graphic user interface image viewer to flip
-through the images more easily, or import them into your papers/reports.
+You can now use your general graphical image viewer to flip through the images
more easily, or import them into your papers/reports.
@cindex GNU Parallel
-The @code{for} loop above to convert the images will do the job in series:
-each file is converted only after the previous one is complete. If you have
-@url{https://www.gnu.org/s/parallel, GNU Parallel}, you can greatly speed
-up this conversion. GNU Parallel will run the separate commands
-simultaneously on different CPU threads in parallel. For more information
-on efficiently using your threads, see @ref{Multi-threaded
-operations}. Here is a replacement for the shell @code{for} loop above
-using GNU Parallel.
+The @code{for} loop above to convert the images will do the job in series:
each file is converted only after the previous one is complete.
+If you have @url{https://www.gnu.org/s/parallel, GNU Parallel}, you can
greatly speed up this conversion.
+GNU Parallel will run the separate commands simultaneously on different CPU
threads in parallel.
+For more information on efficiently using your threads, see
@ref{Multi-threaded operations}.
+Here is a replacement for the shell @code{for} loop above using GNU Parallel.
@example
$ cd crop-red
@@ -4224,10 +3254,10 @@ $ cd ..
@cindex DS9
@cindex SAO DS9
-As the final action, let's see how these objects are positioned over the
-dataset. DS9 has the ``Region''s concept for this purpose. You just have to
-convert your catalog into a ``region file'' to feed into DS9. To do that,
-you can use AWK again as shown below.
+As the final action, let's see how these objects are positioned over the
dataset.
+DS9 has the ``Regions'' concept for this purpose.
+You just have to convert your catalog into a ``region file'' to feed into DS9.
+To do that, you can use AWK again as shown below.
@example
$ awk 'BEGIN@{print "# Region file format: DS9 version 4.1"; \
@@ -4237,10 +3267,8 @@ $ awk 'BEGIN@{print "# Region file format: DS9 version
4.1"; \
reddest.txt > reddest.reg
@end example
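A DS9 region file is plain text: a format header, a coordinate-system line (here @code{fk5}), then one shape per row. A toy version of the conversion with a hypothetical coordinate:

```shell
# Minimal DS9-region construction from "RA Dec" rows; the coordinate is
# hypothetical and the radius is fixed at 1 arcsecond.
printf '53.1600 -27.7800\n' \
    | awk 'BEGIN { print "# Region file format: DS9 version 4.1"; print "fk5" }
           { printf "circle(%s,%s,1\")\n", $1, $2 }'
```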
-This region file can be loaded into DS9 with its @option{-regions} option
-to display over any image (that has world coordinate system). In the
-example below, we'll open Segment's output and load the regions over all
-the extensions (to see the image and the respective clump):
+This region file can be loaded into DS9 with its @option{-regions} option to
display over any image (that has a world coordinate system).
+In the example below, we'll open Segment's output and load the regions over
all the extensions (to see the image and the respective clump):
@example
$ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit \
@@ -4250,14 +3278,10 @@ $ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit
\
@node Citing and acknowledging Gnuastro, , Finding reddest clumps and visual
inspection, General program usage tutorial
@subsection Citing and acknowledging Gnuastro
-In conclusion, we hope this extended tutorial has been a good starting
-point to help in your exciting research. If this book or any of the
-programs in Gnuastro have been useful for your research, please cite the
-respective papers, and acknowledge the funding agencies that made all of
-this possible. All Gnuastro programs have a @option{--cite} option to
-facilitate the citation and acknowledgment. Just note that it may be
-necessary to cite additional papers for different programs, so please try
-it out on all the programs that you used, for example:
+In conclusion, we hope this extended tutorial has been a good starting point
to help in your exciting research.
+If this book or any of the programs in Gnuastro have been useful for your
research, please cite the respective papers, and acknowledge the funding
agencies that made all of this possible.
+All Gnuastro programs have a @option{--cite} option to facilitate the citation
and acknowledgment.
+Just note that it may be necessary to cite additional papers for different
programs, so please try it out on all the programs that you used, for example:
@example
$ astmkcatalog --cite
@@ -4274,67 +3298,44 @@ $ astnoisechisel --cite
@node Detecting large extended targets, , General program usage tutorial,
Tutorials
@section Detecting large extended targets
-The outer wings of large and extended objects can sink into the noise very
-gradually and can have a large variety of shapes (for example due to tidal
-interactions). Therefore separating the outer boundaries of the galaxies
-from the noise can be particularly tricky. Besides causing an
-under-estimation in the total estimated brightness of the target, failure
-to detect such faint wings will also cause a bias in the noise
-measurements, thereby hampering the accuracy of any measurement on the
-dataset. Therefore even if they don't constitute a significant fraction of
-the target's light, or aren't your primary target, these regions must not
-be ignored. In this tutorial, we'll walk you through the strategy of
-detecting such targets using @ref{NoiseChisel}.
+The outer wings of large and extended objects can sink into the noise very
gradually and can have a large variety of shapes (for example due to tidal
interactions).
+Therefore separating the outer boundaries of the galaxies from the noise can
be particularly tricky.
+Besides causing an under-estimation of the target's total brightness, failure
to detect such faint wings will also cause a bias in the noise measurements,
thereby hampering the accuracy of any measurement on the dataset.
+Therefore even if they don't constitute a significant fraction of the target's
light, or aren't your primary target, these regions must not be ignored.
+In this tutorial, we'll walk you through the strategy of detecting such
targets using @ref{NoiseChisel}.
@cartouche
@noindent
-@strong{Don't start with this tutorial:} If you haven't already completed
-@ref{General program usage tutorial}, we strongly recommend going through
-that tutorial before starting this one. Basic features like access to this
-book on the command-line, the configuration files of Gnuastro's programs,
-benefiting from the modular nature of the programs, viewing multi-extension
-FITS files, or using NoiseChisel's outputs are discussed in more detail
-there.
+@strong{Don't start with this tutorial:} If you haven't already completed
@ref{General program usage tutorial}, we strongly recommend going through that
tutorial before starting this one.
+Basic features like access to this book on the command-line, the configuration
files of Gnuastro's programs, benefiting from the modular nature of the
programs, viewing multi-extension FITS files, or using NoiseChisel's outputs
are discussed in more detail there.
@end cartouche
@cindex M51
@cindex NGC5195
@cindex SDSS, Sloan Digital Sky Survey
@cindex Sloan Digital Sky Survey, SDSS
-We'll try to detect the faint tidal wings of the beautiful M51
-group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this
-tutorial. We'll use a dataset/image from the public
-@url{http://www.sdss.org/, Sloan Digital Sky Survey}, or SDSS. Due to its
-more peculiar low surface brightness structure/features, we'll focus on the
-dwarf companion galaxy of the group (or NGC 5195). To get the image, you
-can use SDSS's @url{https://dr12.sdss.org/fields, Simple field search}
-tool. As long as it is covered by the SDSS, you can find an image
-containing your desired target either by providing a standard name (if it
-has one), or its coordinates. To access the dataset we will use here, write
-@code{NGC5195} in the ``Object Name'' field and press ``Submit'' button.
+We'll try to detect the faint tidal wings of the beautiful M51
group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this tutorial.
+We'll use a dataset/image from the public @url{http://www.sdss.org/, Sloan
Digital Sky Survey}, or SDSS.
+Due to its more peculiar low surface brightness structure/features, we'll
focus on the dwarf companion galaxy of the group (or NGC 5195).
+To get the image, you can use SDSS's @url{https://dr12.sdss.org/fields, Simple
field search} tool.
+As long as it is covered by the SDSS, you can find an image containing your
desired target either by providing a standard name (if it has one), or its
coordinates.
+To access the dataset we will use here, write @code{NGC5195} in the ``Object
Name'' field and press the ``Submit'' button.
@cartouche
@noindent
-@strong{Type the example commands:} Try to type the example commands on
-your terminal and use the history feature of your command-line (by pressing
-the ``up'' button to retrieve previous commands). Don't simply copy and
-paste the commands shown here. This will help simulate future situations
-when you are processing your own datasets.
+@strong{Type the example commands:} Try to type the example commands on your
terminal and use the history feature of your command-line (by pressing the
``up'' button to retrieve previous commands).
+Don't simply copy and paste the commands shown here.
+This will help simulate future situations when you are processing your own
datasets.
@end cartouche
@cindex GNU Wget
-You can see the list of available filters under the color image. For this
-demonstration, we'll use the r-band filter image. By clicking on the
-``r-band FITS'' link, you can download the image. Alternatively, you can
-just run the following command to download it with GNU Wget@footnote{To
-make the command easier to view on screen or in a page, we have defined the
-top URL of the image as the @code{topurl} shell variable. You can just
-replace the value of this variable with @code{$topurl} in the
-@command{wget} command.}. To keep things clean, let's also put it in a
-directory called @file{ngc5195}. With the @option{-O} option, we are asking
-Wget to save the downloaded file with a more manageable name:
-@file{r.fits.bz2} (this is an r-band image of NGC 5195, which was the
-directory name).
+You can see the list of available filters under the color image.
+For this demonstration, we'll use the r-band filter image.
+By clicking on the ``r-band FITS'' link, you can download the image.
+Alternatively, you can just run the following command to download it with GNU
Wget@footnote{To make the command easier to view on screen or in a page, we
have defined the top URL of the image as the @code{topurl} shell variable.
+You can just replace the value of this variable with @code{$topurl} in the
@command{wget} command.}.
+To keep things clean, let's also put it in a directory called @file{ngc5195}.
+With the @option{-O} option, we are asking Wget to save the downloaded file
with a more manageable name: @file{r.fits.bz2} (this is an r-band image of NGC
5195, which was the directory name).
@example
$ mkdir ngc5195
@@ -4345,14 +3346,11 @@ $ wget
$topurl/301/3716/6/frame-r-003716-6-0117.fits.bz2 -Or.fits.bz2
@cindex Bzip2
@noindent
-This server keeps the files in a Bzip2 compressed file format. So we'll
-first decompress it with the following command. By convention, compression
-programs delete the original file (compressed when uncompressing, or
-uncompressed when compressing). To keep the original file, you can use the
-@option{--keep} or @option{-k} option which is available in most
-compression programs for this job. Here, we don't need the compressed file
-any more, so we'll just let @command{bunzip} delete it for us and keep the
-directory clean.
+This server keeps the files in a Bzip2 compressed file format.
+So we'll first decompress it with the following command.
+By convention, compression programs delete the original file (compressed when
uncompressing, or uncompressed when compressing).
+To keep the original file, you can use the @option{--keep} or @option{-k}
option which is available in most compression programs for this job.
+Here, we don't need the compressed file any more, so we'll just let
@command{bunzip2} delete it for us and keep the directory clean.
@example
$ bunzip2 r.fits.bz2
@@ -4366,193 +3364,129 @@ $ bunzip2 r.fits.bz2
@node NoiseChisel optimization, Achieved surface brightness level, Detecting
large extended targets, Detecting large extended targets
@subsection NoiseChisel optimization
-In @ref{Detecting large extended targets} we downladed the single exposure
-SDSS image. Let's see how NoiseChisel operates on it with its default
-parameters:
+In @ref{Detecting large extended targets} we downloaded the single exposure
SDSS image.
+Let's see how NoiseChisel operates on it with its default parameters:
@example
$ astnoisechisel r.fits -h0
@end example
-As described in @ref{Multiextension FITS files NoiseChisel's output},
-NoiseChisel's default output is a multi-extension FITS file. Open the
-output @file{r_detected.fits} file and have a look at the extensions, the
-first extension is only meta-data and contains NoiseChisel's configuration
-parameters. The rest are the Sky-subtracted input, the detection map, Sky
-values and Sky standard deviation.
+As described in @ref{Multiextension FITS files NoiseChisel's output},
NoiseChisel's default output is a multi-extension FITS file.
+Open the output @file{r_detected.fits} file and have a look at the extensions;
the first extension is only meta-data and contains NoiseChisel's configuration
parameters.
+The rest are the Sky-subtracted input, the detection map, Sky values and Sky
standard deviation.
@example
$ ds9 -mecube r_detected.fits -zscale -zoom to fit
@end example
-Flipping through the extensions in a FITS viewer, you will see that the
-first image (Sky-subtracted image) looks reasonable: there are no major
-artifacts due to bad Sky subtraction compared to the input. The second
-extension also seems reasonable with a large detection map that covers the
-whole of NGC5195, but also extends beyond towards the bottom of the
-image.
+Flipping through the extensions in a FITS viewer, you will see that the first
image (Sky-subtracted image) looks reasonable: there are no major artifacts due
to bad Sky subtraction compared to the input.
+The second extension also seems reasonable with a large detection map that
covers the whole of NGC5195, but also extends beyond towards the bottom of the
image.
+
+Now try flipping between the @code{DETECTIONS} and @code{SKY} extensions.
+In the @code{SKY} extension, you'll notice that there is still significant
signal beyond the detected pixels.
+You can tell that this signal belongs to the galaxy because the far-right side
of the image is dark and the brighter tiles are surrounding the detected pixels.
+
+The fact that signal from the galaxy remains in the Sky dataset shows that you
haven't done a good detection.
+The @code{SKY} extension must not contain any light around the galaxy.
+Generally, any time your target is much larger than the tile size and the
signal is almost flat (like this case), this @emph{will} happen.
+Therefore, when there are large objects in the dataset, @strong{the best
place} to check the accuracy of your detection is the estimated Sky image.
-Now try fliping between the @code{DETECTIONS} and @code{SKY} extensions.
-In the @code{SKY} extension, you'll notice that there is still significant
-signal beyond the detected pixels. You can tell that this signal belongs to
-the galaxy because the far-right side of the image is dark and the brighter
-tiles are surrounding the detected pixels.
-
-The fact that signal from the galaxy remains in the Sky dataset shows that
-you haven't done a good detection. The @code{SKY} extension must not
-contain any light around the galaxy. Generally, any time your target is
-much larger than the tile size and the signal is almost flat (like this
-case), this @emph{will} happen. Therefore, when there are large objects in
-the dataset, @strong{the best place} to check the accuracy of your
-detection is the estimated Sky image.
-
-When dominated by the background, noise has a symmetric
-distribution. However, signal is not symmetric (we don't have negative
-signal). Therefore when non-constant signal is present in a noisy dataset,
-the distribution will be positively skewed. This skewness is a good measure
-of how much signal we have in the distribution. The skewness can be
-accurately measured by the difference in the mean and median: assuming no
-strong outliers, the more distant they are, the more skewed the dataset
-is. For more see @ref{Quantifying signal in a tile}.
-
-However, skewness is only a proxy for signal when the signal has structure
-(varies per pixel). Therefore, when it is approximately constant over a
-whole tile, or sub-set of the image, the signal's effect is just to shift
-the symmetric center of the noise distribution to the positive and there
-won't be any skewness (major difference between the mean and median). This
-positive@footnote{In processed images, where the Sky value can be
-over-estimated, this constant shift can be negative.} shift that preserves
-the symmetric distribution is the Sky value. When there is a gradient over
-the dataset, different tiles will have different constant
-shifts/Sky-values, for example see Figure 11 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
-
-To get less scatter in measuring the mean and median (and thus better
-estimate the skewness), you will need a larger tile. So let's play with the
-tessellation a little to see how it affects the result. In Gnuastro, you
-can see the option values (@option{--tilesize} in this case) by adding the
-@option{-P} option to your last command. Try running NoiseChisel with
-@option{-P} to see its default tile size.
-
-You can clearly see that the default tile size is indeed much smaller than
-this (huge) galaxy and its tidal features. As a result, NoiseChisel was
-unable to identify the skewness within the tiles under the outer parts of
-M51 and NGC 5159 and the threshold has been over-estimated on those
-tiles. To see which tiles were used for estimating the quantile threshold
-(no skewness was measured), you can use NoiseChisel's
-@option{--checkqthresh} option:
+When dominated by the background, noise has a symmetric distribution.
+However, signal is not symmetric (we don't have negative signal).
+Therefore when non-constant signal is present in a noisy dataset, the
distribution will be positively skewed.
+This skewness is a good measure of how much signal we have in the distribution.
+The skewness can be accurately measured by the difference between the mean and
median: assuming no strong outliers, the more distant they are, the more skewed
the dataset is.
+For more see @ref{Quantifying signal in a tile}.
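The mean-versus-median diagnostic above is easy to see with a toy example in plain shell (the numbers are made up purely for illustration and have nothing to do with the image):

```shell
# Symmetric "noise-only" values: the mean and median agree.
printf '%s\n' 1 2 3 4 5 | sort -n \
  | awk '{v[NR]=$1; s+=$1} END {printf "%g %g\n", s/NR, v[int((NR+1)/2)]}'
# prints: 3 3

# Replace the top value with a strong positive "signal" value: the
# distribution is now skewed and the mean is pulled above the median.
printf '%s\n' 1 2 3 4 15 | sort -n \
  | awk '{v[NR]=$1; s+=$1} END {printf "%g %g\n", s/NR, v[int((NR+1)/2)]}'
# prints: 5 3
```

This same mean-to-median comparison, done per tile on the real image, is what is described here.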
+
+However, skewness is only a proxy for signal when the signal has structure
(varies per pixel).
+Therefore, when it is approximately constant over a whole tile, or sub-set of
the image, the signal's effect is just to shift the symmetric center of the
noise distribution to the positive and there won't be any skewness (major
difference between the mean and median).
+This positive@footnote{In processed images, where the Sky value can be
over-estimated, this constant shift can be negative.} shift that preserves the
symmetric distribution is the Sky value.
+When there is a gradient over the dataset, different tiles will have different
constant shifts/Sky-values, for example see Figure 11 of
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+
+To get less scatter in measuring the mean and median (and thus better estimate
the skewness), you will need a larger tile.
+So let's play with the tessellation a little to see how it affects the result.
+In Gnuastro, you can see the option values (@option{--tilesize} in this case)
by adding the @option{-P} option to your last command.
+Try running NoiseChisel with @option{-P} to see its default tile size.
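For example (a sketch only: @option{-P} and @option{--tilesize} are the options named above, and @command{grep} is used just to filter the long option list):

```shell
# Print all of NoiseChisel's option values for this run, then pick
# out the tile size (r.fits and -h0 as used throughout this tutorial).
astnoisechisel r.fits -h0 -P | grep tilesize
```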
+
+You can clearly see that the default tile size is indeed much smaller than
this (huge) galaxy and its tidal features.
+As a result, NoiseChisel was unable to identify the skewness within the tiles
under the outer parts of M51 and NGC 5195 and the threshold has been
over-estimated on those tiles.
+To see which tiles were used for estimating the quantile threshold (no
skewness was measured), you can use NoiseChisel's @option{--checkqthresh}
option:
@example
$ astnoisechisel r.fits -h0 --checkqthresh
@end example
-Notice how this option doesn't allow NoiseChisel to finish. NoiseChisel
-aborted after finding and applying the quantile thresholds. When you call
-any of NoiseChisel's @option{--check*} options, by default, it will abort
-as soon as all the check steps have been written in the check file (a
-multi-extension FITS file). This allows you to focus on the problem you
-wanted to check as soon as possible (you can disable this feature with the
-@option{--continueaftercheck} option).
-
-To optimize the threshold-related settings for this image, let's playing
-with this quantile threshold check image a little. Don't forget that
-``@emph{Good statistical analysis is not a purely routine matter, and
-generally calls for more than one pass through the computer}'' (Anscombe
-1973, see @ref{Science and its tools}). A good scientist must have a good
-understanding of her tools to make a meaningful analysis. So don't hesitate
-in playing with the default configuration and reviewing the manual when you
-have a new dataset in front of you. Robust data analysis is an art,
-therefore a good scientist must first be a good artist.
-
-The first extension of @file{r_qthresh.fits} (@code{CONVOLVED}) is the
-convolved input image where the threshold(s) is(are) defined and
-applied. For more on the effect of convolution and thresholding, see
-Sections 3.1.1 and 3.1.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi
-and Ichikawa [2015]}. The second extension (@code{QTHRESH_ERODE}) has a
-blank value for all the pixels of any tile that was identified as having
-significant signal. The next two extensions (@code{QTHRESH_NOERODE} and
-@code{QTHRESH_EXPAND}) are the other two quantile thresholds that are
-necessary in NoiseChisel's later steps. Every step in this file is repeated
-on the three thresholds.
-
-Play a little with the color bar of the @code{QTHRESH_ERODE} extension, you
-clearly see how the non-blank tiles around NGC 5195 have a gradient. As one
-line of attack against discarding too much signal below the threshold,
-NoiseChisel rejects outlier tiles. Go forward by three extensions to
-@code{VALUE1_NO_OUTLIER} and you will see that many of the tiles over the
-galaxy have been removed in this step. For more on the outlier rejection
-algorithm, see the latter half of @ref{Quantifying signal in a tile}.
-
-However, the default outlier rejection parameters weren't enough, and when
-you play with the color-bar, you still see a strong gradient around the
-outer tidal feature of the galaxy. You have two strategies for fixing this
-problem: 1) Increase the tile size to get more accurate measurements of
-skewness. 2) Strengthen the outlier rejection parameters to discard more of
-the tiles with signal. Fortunately in this image we have a sufficiently
-large region on the right of the image that the galaxy doesn't extend
-to. So we can use the more robust first solution. In situations where this
-doesn't happen (for example if the field of view in this image was shifted
-to have more of M51 and less sky) you are limited to a combination of the
-two solutions or just to the second solution.
+Notice how this option doesn't allow NoiseChisel to finish.
+NoiseChisel aborted after finding and applying the quantile thresholds.
+When you call any of NoiseChisel's @option{--check*} options, by default, it
will abort as soon as all the check steps have been written in the check file
(a multi-extension FITS file).
+This allows you to focus on the problem you wanted to check as soon as
possible (you can disable this feature with the @option{--continueaftercheck}
option).
+
+To optimize the threshold-related settings for this image, let's play with
this quantile threshold check image a little.
+Don't forget that ``@emph{Good statistical analysis is not a purely routine
matter, and generally calls for more than one pass through the computer}''
(Anscombe 1973, see @ref{Science and its tools}).
+A good scientist must have a good understanding of her tools to make a
meaningful analysis.
+So don't hesitate to play with the default configuration and to review the
manual when you have a new dataset in front of you.
+Robust data analysis is an art; therefore, a good scientist must first be a
good artist.
+
+The first extension of @file{r_qthresh.fits} (@code{CONVOLVED}) is the
convolved input image where the threshold(s) is(are) defined and applied.
+For more on the effect of convolution and thresholding, see Sections 3.1.1 and
3.1.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+The second extension (@code{QTHRESH_ERODE}) has a blank value for all the
pixels of any tile that was identified as having significant signal.
+The next two extensions (@code{QTHRESH_NOERODE} and @code{QTHRESH_EXPAND}) are
the other two quantile thresholds that are necessary in NoiseChisel's later
steps.
+Every step in this file is repeated on the three thresholds.
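To flip through these extensions yourself, the check file can be opened just like the earlier multi-extension outputs (@file{r_qthresh.fits} is the check file NoiseChisel writes, as referenced later in this tutorial):

```shell
# Open the quantile-threshold check file as a multi-extension cube.
ds9 -mecube r_qthresh.fits -zscale -zoom to fit
```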
+
+Play a little with the color bar of the @code{QTHRESH_ERODE} extension; you
will clearly see how the non-blank tiles around NGC 5195 have a gradient.
+As one line of attack against discarding too much signal below the threshold,
NoiseChisel rejects outlier tiles.
+Go forward by three extensions to @code{VALUE1_NO_OUTLIER} and you will see
that many of the tiles over the galaxy have been removed in this step.
+For more on the outlier rejection algorithm, see the latter half of
@ref{Quantifying signal in a tile}.
+
+However, the default outlier rejection parameters weren't enough, and when you
play with the color-bar, you still see a strong gradient around the outer tidal
feature of the galaxy.
+You have two strategies for fixing this problem: 1) Increase the tile size to
get more accurate measurements of skewness.
+2) Strengthen the outlier rejection parameters to discard more of the tiles
with signal.
+Fortunately, this image has a sufficiently large region on the right that the
galaxy doesn't extend to.
+So we can use the more robust first solution.
+In situations where this doesn't happen (for example if the field of view in
this image was shifted to have more of M51 and less sky) you are limited to a
combination of the two solutions or just to the second solution.
@cartouche
@noindent
-@strong{Skipping convolution for faster tests:} The slowest step of
-NoiseChisel is the convolution of the input dataset. Therefore when your
-dataset is large (unlike the one in this test), and you are not changing
-the input dataset or kernel in multiple runs (as in the tests of this
-tutorial), it is faster to do the convolution separately once (using
-@ref{Convolve}) and use NoiseChisel's @option{--convolved} option to
-directly feed the convolved image and avoid convolution. For more on
-@option{--convolved}, see @ref{NoiseChisel input}.
+@strong{Skipping convolution for faster tests:} The slowest step of
NoiseChisel is the convolution of the input dataset.
+Therefore when your dataset is large (unlike the one in this test), and you
are not changing the input dataset or kernel in multiple runs (as in the tests
of this tutorial), it is faster to do the convolution separately once (using
@ref{Convolve}) and use NoiseChisel's @option{--convolved} option to directly
feed the convolved image and avoid convolution.
+For more on @option{--convolved}, see @ref{NoiseChisel input}.
@end cartouche
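A rough sketch of that two-step workflow (the option names other than @option{--convolved} are assumptions here, as is the kernel file name; see @ref{Convolve} and @ref{NoiseChisel input} for the exact interface):

```shell
# Convolve once, up front (kernel.fits is a placeholder: it should be
# the same kernel NoiseChisel would otherwise use internally).
astconvolve r.fits -h0 --kernel=kernel.fits --output=conv.fits

# Feed the pre-convolved image to every subsequent NoiseChisel run.
astnoisechisel r.fits -h0 --convolved=conv.fits
```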
-To identify the skewness caused by the flat NGC 5195 and M51 tidal features
-on the tiles under it, we thus have to choose a tile size that is larger
-than the gradient of the signal. Let's try a tile size of 75 by 75 pixels:
+To identify the skewness caused by the flat NGC 5195 and M51 tidal features on
the tiles under it, we thus have to choose a tile size that is larger than the
gradient of the signal.
+Let's try a tile size of 75 by 75 pixels:
@example
$ astnoisechisel r.fits -h0 --tilesize=75,75 --checkqthresh
@end example
-You can clearly see the effect of this increased tile size: the tiles are
-much larger and when you look into @code{VALUE1_NO_OUTLIER}, you see that
-almost all the previous tiles under the galaxy have been discarded and we
-only have a few tiles on the edge with a gradient. So let's define a smore
-strict condition to keep tiles:
+You can clearly see the effect of this increased tile size: the tiles are much
larger and when you look into @code{VALUE1_NO_OUTLIER}, you see that almost all
the previous tiles under the galaxy have been discarded and we only have a few
tiles on the edge with a gradient.
+So let's define a stricter condition to keep tiles:
@example
$ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
--checkqthresh
@end example
-After constraining @code{--meanmedqdiff}, NoiseChisel stopped with a
-different error. Please read it: at the start, it says that only 6 tiles
-passed the constraint while you have asked for 9. The @file{r_qthresh.fits}
-image also only has 8 extensions (not the original 15). Take a look at the
-initially selected tiles and those after outlier rejection. You can see the
-place of the tiles that passed. They seem to be in the good place (very far
-away from the M51 group and its tidal feature. Using the 6 nearest
-neighbors is also not too bad. So let's decrease the number of neighboring
-tiles for interpolation so NoiseChisel can continue:
+After constraining @code{--meanmedqdiff}, NoiseChisel stopped with a different
error.
+Please read it: at the start, it says that only 6 tiles passed the constraint
while you have asked for 9.
+The @file{r_qthresh.fits} image also only has 8 extensions (not the original
15).
+Take a look at the initially selected tiles and those after outlier rejection.
+You can see the place of the tiles that passed.
+They seem to be in a good place (very far away from the M51 group and its
tidal feature).
+Using the 6 nearest neighbors is also not too bad.
+So let's decrease the number of neighboring tiles for interpolation so
NoiseChisel can continue:
@example
$ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
--interpnumngb=6 --checkqthresh
@end example
-The next group of extensions (those ending with @code{_INTERP}), give a
-value to all blank tiles based on the nearest tiles with a measurement. The
-following group of extensions (ending with @code{_SMOOTH}) have smoothed
-the interpolated image to avoid sharp cuts on tile edges. Inspecting
-@code{THRESH1_SMOOTH}, you can see that there is no longer any significant
-gradient and no major signature of NGC 5195 exists.
+The next group of extensions (those ending with @code{_INTERP}) gives a value
to all blank tiles based on the nearest tiles with a measurement.
+The following group of extensions (ending with @code{_SMOOTH}) has smoothed
the interpolated image to avoid sharp cuts on tile edges.
+Inspecting @code{THRESH1_SMOOTH}, you can see that there is no longer any
significant gradient and no major signature of NGC 5195 exists.
-We can now remove @option{--checkqthresh} and let NoiseChisel proceed with
-its detection. Also, similar to the argument in @ref{NoiseChisel
-optimization for detection}, in the command above, we set the
-pseudo-detection signal-to-noise ratio quantile (@option{--snquant}) to
-0.95.
+We can now remove @option{--checkqthresh} and let NoiseChisel proceed with its
detection.
+Also, similar to the argument in @ref{NoiseChisel optimization for detection},
in the command above, we set the pseudo-detection signal-to-noise ratio
quantile (@option{--snquant}) to 0.95.
@example
$ rm r_qthresh.fits
@@ -4560,40 +3494,27 @@ $ astnoisechisel r.fits -h0 --tilesize=75,75
--meanmedqdiff=0.001 \
--interpnumngb=6 --snquant=0.95
@end example
-Looking at the @code{DETECTIONS} extension of NoiseChisel's output, we see
-the right-ward edges in particular have many holes that are fully
-surrounded by signal and the signal stretches out in the noise very thinly
-(the size of the holes increases as we go out). This suggests that there is
-still signal that can be detected. You can confirm this guess by looking at
-the @code{SKY} extension to see that indeed, there is a clear footprint of
-the M51 group in the Sky image (which is not good!). Therefore, we should
-dig deeper into the noise.
+Looking at the @code{DETECTIONS} extension of NoiseChisel's output, we see the
rightward edges in particular have many holes that are fully surrounded by
signal and the signal stretches out in the noise very thinly (the size of the
holes increases as we go out).
+This suggests that there is still signal that can be detected.
+You can confirm this guess by looking at the @code{SKY} extension to see that
indeed, there is a clear footprint of the M51 group in the Sky image (which is
not good!).
+Therefore, we should dig deeper into the noise.
-With the @option{--detgrowquant} option, NoiseChisel will use the
-detections as seeds and grow them in to the noise. Its value is the
-ultimate limit of the growth in units of quantile (between 0 and
-1). Therefore @option{--detgrowquant=1} means no growth and
-@option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which
-is usually too much!). Try running the previous command with various values
-(from 0.6 to higher values) to see this option's effect. For this
-particularly huge galaxy (with signal that extends very gradually into the
-noise), we'll set it to @option{0.65}:
+With the @option{--detgrowquant} option, NoiseChisel will use the detections
as seeds and grow them into the noise.
+Its value is the ultimate limit of the growth in units of quantile (between 0
and 1).
+Therefore @option{--detgrowquant=1} means no growth and
@option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which is
usually too much!).
+Try running the previous command with various values (from 0.6 to higher
values) to see this option's effect.
+For this particularly huge galaxy (with signal that extends very gradually
into the noise), we'll set it to @option{0.65}:
@example
$ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
--interpnumngb=6 --snquant=0.95 --detgrowquant=0.65
@end example
-Beyond this level (smaller @option{--detgrowquant} values), you see the
-smaller background galaxies starting to create thin spider-leg-like
-features, showing that we are following correlated noise for too much.
+Beyond this level (smaller @option{--detgrowquant} values), you see the
smaller background galaxies starting to create thin spider-leg-like features,
showing that we are following correlated noise too far.
-Now, when you look at the @code{DETECTIONS} extension, you see the wings of
-the galaxy being detected much farther out, But you also see many holes
-which are clearly just caused by noise. After growing the objects,
-NoiseChisel also allows you to fill such holes when they are smaller than a
-certain size through the @option{--detgrowmaxholesize} option. In this
-case, a maximum area/size of 10,000 pixels seems to be good:
+Now, when you look at the @code{DETECTIONS} extension, you see the wings of
the galaxy being detected much farther out, but you also see many holes which
are clearly just caused by noise.
+After growing the objects, NoiseChisel also allows you to fill such holes when
they are smaller than a certain size through the @option{--detgrowmaxholesize}
option.
+In this case, a maximum area/size of 10,000 pixels seems to be good:
@example
$ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
@@ -4601,17 +3522,13 @@ $ astnoisechisel r.fits -h0 --tilesize=75,75
--meanmedqdiff=0.001 \
--detgrowmaxholesize=10000
@end example
-The detection looks good now, but when you look in to the @code{SKY}
-extension, you still clearly still see a footprint of the galaxy. We'll
-leave it as an exercise for you to play with NoiseChisel further and
-improve the detected pixels.
+The detection looks good now, but when you look into the @code{SKY}
extension, you still clearly see a footprint of the galaxy.
+We'll leave it as an exercise for you to play with NoiseChisel further and
improve the detected pixels.
-So, we'll just stop with one last tool NoiseChisel gives you to get a
-slightly better estimation of the Sky: @option{--minskyfrac}. On each tile,
-NoiseChisel will only measure the Sky-level if the fraction of undetected
-pixels is larger than the value given to this option. To avoid the edges of
-the galaxy, we'll set it to @option{0.9}. Therefore, tiles that are covered
-by detected pixels for more than @mymath{10\%} of their area are ignored.
+So, we'll just stop with one last tool NoiseChisel gives you to get a slightly
better estimation of the Sky: @option{--minskyfrac}.
+On each tile, NoiseChisel will only measure the Sky-level if the fraction of
undetected pixels is larger than the value given to this option.
+To avoid the edges of the galaxy, we'll set it to @option{0.9}.
+Therefore, tiles that are covered by detected pixels for more than
@mymath{10\%} of their area are ignored.
@example
$ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
@@ -4619,20 +3536,16 @@ $ astnoisechisel r.fits -h0 --tilesize=75,75
--meanmedqdiff=0.001 \
--detgrowmaxholesize=10000 --minskyfrac=0.9
@end example
-The footprint of the galaxy still exists in the @code{SKY} extension, but
-it has decreased in significance now. Let's calculate the significance of
-the undetected gradient, in units of noise. Since the gradient is roughly
-along the horizontal axis, we'll collapse the image along the second
-(vertical) FITS dimension to have a 1D array (a table column, see its
-values with the second command).
+The footprint of the galaxy still exists in the @code{SKY} extension, but it
has decreased in significance now.
+Let's calculate the significance of the undetected gradient, in units of noise.
+Since the gradient is roughly along the horizontal axis, we'll collapse the
image along the second (vertical) FITS dimension to have a 1D array (a table
column, see its values with the second command).
@example
$ astarithmetic r_detected.fits 2 collapse-mean -hSKY -ocollapsed.fits
$ asttable collapsed.fits
@end example
-We can now calculate the minimum and maximum values of this array and
-define their difference (in units of noise) as the gradient:
+We can now calculate the minimum and maximum values of this array and define
their difference (in units of noise) as the gradient:
@example
$ grad=$(astarithmetic r_detected.fits 2 collapse-mean set-i \
@@ -4643,89 +3556,57 @@ $ echo $std
$ astarithmetic -q $grad $std /
@end example
-The undetected gradient (@code{grad} above) is thus roughly a quarter of
-the noise. But don't forget that this is per-pixel: individually its small,
-but it extends over millions of pixels, so the total flux may still be
-relevant.
-
-When looking at the raw input shallow image, you don't see anything so far
-out of the galaxy. You might just think that ``this is all noise, I have
-just dug too deep and I'm following systematics''! If you feel like this,
-have a look at the deep images of this system in
-@url{https://arxiv.org/abs/1501.04599, Watkins et al. [2015]}, or a 12 hour
-deep image of this system (with a 12-inch telescope):
-@url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from
-this Reddit discussion:
-@url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_whirlpool_galaxy/}}.
In
-these deepr images you see that the outer edges of the M51 group clearly
-follow this exact structure, below in @ref{Achieved surface brightness
-level}, we'll measure the exact level.
-
-As the gradient in the @code{SKY} extension shows, and the deep images
-cited above confirm, the galaxy's signal extends even beyond this. But this
-is already far deeper than what most (if not all) other tools can detect.
-Therefore, we'll stop configuring NoiseChisel at this point in the tutorial
-and let you play with it a little more while reading more about it in
-@ref{NoiseChisel}.
-
-After finishing this tutorial please go through the NoiseChisel paper and
-its options and play with them to further decrease the gradient. This will
-greatly help you get a good feeling of the options. When you do find a
-better configuration, please send it to us and we'll mention your name here
-with your suggested configuration. Don't forget that good data analysis is
-an art, so like a sculptor, master your chisel for a good result.
+The undetected gradient (@code{grad} above) is thus roughly a quarter of the
noise.
+But don't forget that this is per-pixel: individually it's small, but it
extends over millions of pixels, so the total flux may still be relevant.
+
+When looking at the raw input shallow image, you don't see anything so far out
of the galaxy.
+You might just think that ``this is all noise, I have just dug too deep and
I'm following systematics''!
+If you feel like this, have a look at the deep
images of this system in @url{https://arxiv.org/abs/1501.04599, Watkins et al.
[2015]}, or a 12 hour deep image of this system (with a 12-inch telescope):
@url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from this
Reddit discussion:
@url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_wh
[...]
+In these deeper images you see that the outer edges of the M51 group clearly
follow this exact structure; below, in @ref{Achieved surface brightness level},
we'll measure the exact level.
+
+As the gradient in the @code{SKY} extension shows, and the deep images cited
above confirm, the galaxy's signal extends even beyond this.
+But this is already far deeper than what most (if not all) other tools can
detect.
+Therefore, we'll stop configuring NoiseChisel at this point in the tutorial
and let you play with it a little more while reading more about it in
@ref{NoiseChisel}.
+
+After finishing this tutorial please go through the NoiseChisel paper and its
options and play with them to further decrease the gradient.
+This will greatly help you get a good feeling of the options.
+When you do find a better configuration, please send it to us and we'll
mention your name here with your suggested configuration.
+Don't forget that good data analysis is an art, so like a sculptor, master
your chisel for a good result.
@cartouche
@noindent
-@strong{This NoiseChisel configuration is NOT GENERIC:} Don't use this
-configuration blindly on another image. As you saw above, the reason we
-chose this particular configuration for NoiseChisel to detect the wings of
-the M51 group was strongly influenced by the noise properties of this
-particular image. So as long as your image noise has similar properties
-(from the same data-reduction step of the same database), you can use this
-configuration on any image. For images from other instruments, or
-higher-level/reduced SDSS products, please follow a similar logic to what
-was presented here and find the best configuation yourself.
+@strong{This NoiseChisel configuration is NOT GENERIC:} Don't use this
configuration blindly on another image.
+As you saw above, the reason we chose this particular configuration for
NoiseChisel to detect the wings of the M51 group was strongly influenced by the
noise properties of this particular image.
+So as long as your image noise has similar properties (from the same
data-reduction step of the same database), you can use this configuration on
any image.
+For images from other instruments, or higher-level/reduced SDSS products,
please follow a similar logic to what was presented here and find the best
configuration yourself.
@end cartouche
@cartouche
@noindent
-@strong{Smart NoiseChisel:} As you saw during this section, there is a
-clear logic behind the optimal paramter value for each dataset. Therfore,
-we plan to capabilities to (optionally) automate some of the choices made
-here based on the actual dataset, please join us in doing this if you are
-interested. However, given the many problems in existing ``smart''
-solutions, such automatic changing of the configuration may cause more
-problems than they solve. So even when they are implemented, we would
-strongly recommend quality checks for a robust analysis.
+@strong{Smart NoiseChisel:} As you saw during this section, there is a clear
logic behind the optimal parameter value for each dataset.
+Therefore, we plan to add capabilities to (optionally) automate some of the choices made here based on the actual dataset; please join us in doing this if you are interested.
+However, given the many problems in existing ``smart'' solutions, such automatic changing of the configuration may cause more problems than it solves.
+So even when they are implemented, we would strongly recommend quality checks
for a robust analysis.
@end cartouche
@node Achieved surface brightness level, , NoiseChisel optimization,
Detecting large extended targets
@subsection Achieved surface brightness level
-In @ref{NoiseChisel optimization} we showed how to customize NoiseChisel
-for a single-exposure SDSS image of the M51 group. let's measure how deep
-we carved the signal out of noise. For this measurement, we'll need to
-estimate the average flux on the outer edges of the detection. Fortunately
-all this can be done with a few simple commands (and no higher-level
-language mini-environments like Python or IRAF) using @ref{Arithmetic} and
-@ref{MakeCatalog}.
+In @ref{NoiseChisel optimization} we showed how to customize NoiseChisel for a
single-exposure SDSS image of the M51 group.
+Let's measure how deep we carved the signal out of noise.
+For this measurement, we'll need to estimate the average flux on the outer
edges of the detection.
+Fortunately all this can be done with a few simple commands (and no
higher-level language mini-environments like Python or IRAF) using
@ref{Arithmetic} and @ref{MakeCatalog}.
@cindex Opening
-First, let's separate each detected region, or give a unique label/counter
-to all the connected pixels of NoiseChisel's detection map:
+First, let's separate each detected region, or give a unique label/counter to
all the connected pixels of NoiseChisel's detection map:
@example
$ det="r_detected.fits -hDETECTIONS"
$ astarithmetic $det 2 connected-components -olabeled.fits
@end example
-You can find the the label of the main galaxy visually (by opening the
-image and hovering your mouse over the M51 group's label). But to have a
-little more fun, lets do this automatically. The M51 group detection is by
-far the largest detection in this image, this allows us to find the
-ID/label that corresponds to it. We'll first run MakeCatalog to find the
-area of all the detections, then we'll use AWK to find the ID of the
-largest object and keep it as a shell variable (@code{id}):
+You can find the label of the main galaxy visually (by opening the image and
hovering your mouse over the M51 group's label).
+But to have a little more fun, let's do this automatically.
+The M51 group detection is by far the largest detection in this image; this allows us to find the ID/label that corresponds to it.
+We'll first run MakeCatalog to find the area of all the detections, then we'll
use AWK to find the ID of the largest object and keep it as a shell variable
(@code{id}):
@example
$ astmkcatalog labeled.fits --ids --geoarea -h1 -ocat.txt
@@ -4733,10 +3614,9 @@ $ id=$(awk '!/^#/@{if($2>max) @{id=$1; max=$2@}@}
END@{print id@}' cat.txt)
$ echo $id
@end example
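If the AWK logic above is opaque, here is the same maximum-finding pattern in isolation, run on a small hypothetical two-column catalog (the IDs and areas below are made up for illustration, not measured from this image):

```shell
# Hypothetical catalog: column 1 is the label/ID, column 2 the area.
printf '# ID AREA\n1 500\n2 120000\n3 900\n' > fake-cat.txt

# Skip comment lines; whenever a row's area exceeds the largest seen
# so far, remember its ID; print the remembered ID at the end.
id=$(awk '!/^#/{if($2>max){id=$1; max=$2}} END{print id}' fake-cat.txt)
echo $id    # → 2
```

Note that uninitialized AWK variables compare as zero in a numeric context, so @code{max} needs no explicit initialization.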
-To separate the outer edges of the detections, we'll need to ``erode'' the
-M51 group detection. We'll erode thre times (to have more pixels and thus
-less scatter), using a maximum connectivity of 2 (8-connected
-neighbors). We'll then save the output in @file{eroded.fits}.
+To separate the outer edges of the detections, we'll need to ``erode'' the M51
group detection.
+We'll erode three times (to have more pixels and thus less scatter), using a
maximum connectivity of 2 (8-connected neighbors).
+We'll then save the output in @file{eroded.fits}.
@example
$ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 erode \
@@ -4744,50 +3624,37 @@ $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2
erode \
@end example
@noindent
-In @file{labeled.fits}, we can now set all the 1-valued pixels of
-@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to
-the previous command. We'll need the pixels of the M51 group in
-@code{labeled.fits} two times: once to do the erosion, another time to find
-the outer pixel layer. To do this (and be efficient and more readable)
-we'll use the @code{set-i} operator. In the command below, it will
-save/set/name the pixels of the M51 group as the `@code{i}'. In this way we
-can use it any number of times afterwards, while only reading it from disk
-and finding M51's pixels once.
+In @file{labeled.fits}, we can now set all the 1-valued pixels of
@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to the
previous command.
+We'll need the pixels of the M51 group in @file{labeled.fits} two times: once to do the erosion, another time to find the outer pixel layer.
+To do this (and be efficient and more readable) we'll use the @code{set-i}
operator.
+In the command below, it will save/set/name the pixels of the M51 group as `@code{i}'.
+In this way we can use it any number of times afterwards, while only reading
it from disk and finding M51's pixels once.
@example
$ astarithmetic labeled.fits $id eq set-i i \
i 2 erode 2 erode 2 erode 0 where -oedge.fits
@end example
-Open the image and have a look. You'll see that the detected edge of the
-M51 group is now clearly visible. You can use @file{edge.fits} to mark
-(set to blank) this boundary on the input image and get a visual feeling of
-how far it extends:
+Open the image and have a look.
+You'll see that the detected edge of the M51 group is now clearly visible.
+You can use @file{edge.fits} to mark (set to blank) this boundary on the input
image and get a visual feeling of how far it extends:
@example
$ astarithmetic r.fits edge.fits nan where -ob-masked.fits -h0
@end example
-To quantify how deep we have detected the low-surface brightness regions,
-we'll use the command below. In short it just divides all the non-zero
-pixels of @file{edge.fits} in the Sky subtracted input (first extension
-of NoiseChisel's output) by the pixel standard deviation of the same
-pixel. This will give us a signal-to-noise ratio image. The mean value of
-this image shows the level of surface brightness that we have achieved.
-
-You can also break the command below into multiple calls to Arithmetic and
-create temporary files to understand it better. However, if you have a look
-at @ref{Reverse polish notation} and @ref{Arithmetic operators}, you should
-be able to easily understand what your computer does when you run this
-command@footnote{@file{edge.fits} (extension @code{1}) is a binary (0
-or 1 valued) image. Applying the @code{not} operator on it, just flips all
-its pixels. Through the @code{where} operator, we are setting all the newly
-1-valued pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY})
-to NaN/blank. In the second line, we are dividing all the non-blank values
-by @file{r_detected.fits} (extension @code{SKY_STD}). This gives the
-signal-to-noise ratio for each of the pixels on the boundary. Finally, with
-the @code{meanvalue} operator, we are taking the mean value of all the
-non-blank pixels and reporting that as a single number.}.
+To quantify how deep we have detected the low-surface brightness regions,
we'll use the command below.
+In short, it just divides all the non-zero pixels of @file{edge.fits} in the Sky-subtracted input (first extension of NoiseChisel's output) by the Sky standard deviation of the same pixel.
+This will give us a signal-to-noise ratio image.
+The mean value of this image shows the level of surface brightness that we
have achieved.
+
+You can also break the command below into multiple calls to Arithmetic and
create temporary files to understand it better.
+However, if you have a look at @ref{Reverse polish notation} and
@ref{Arithmetic operators}, you should be able to easily understand what your
computer does when you run this command@footnote{@file{edge.fits} (extension
@code{1}) is a binary (0 or 1 valued) image.
+Applying the @code{not} operator on it, just flips all its pixels.
+Through the @code{where} operator, we are setting all the newly 1-valued
pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY}) to NaN/blank.
+In the second line, we are dividing all the non-blank values by
@file{r_detected.fits} (extension @code{SKY_STD}).
+This gives the signal-to-noise ratio for each of the pixels on the boundary.
+Finally, with the @code{meanvalue} operator, we are taking the mean value of
all the non-blank pixels and reporting that as a single number.}.
@example
$ edge="edge.fits -h1"
@@ -4798,14 +3665,10 @@ $ astarithmetic $skysub $skystd / $edge not nan where
\
@end example
@cindex Surface brightness
-We have thus detected the wings of the M51 group down to roughly 1/4th of
-the noise level in this image! But the signal-to-noise ratio is a relative
-measurement. Let's also measure the depth of our detection in absolute
-surface brightness units; or magnitudes per square arcseconds. To find out,
-we'll first need to calculate how many pixels of this image are in one
-arcsecond-squared. Fortunately the world coordinate system (or WCS) meta
-data of Gnuastro's output FITS files (in particular the @code{CDELT}
-keywords) give us this information.
+We have thus detected the wings of the M51 group down to roughly 1/4th of the noise level in this image!
+But the signal-to-noise ratio is a relative measurement.
+Let's also measure the depth of our detection in absolute surface brightness units, or magnitudes per square arcsecond.
+To find out, we'll first need to calculate how many pixels of this image are
in one arcsecond-squared.
+Fortunately, the world coordinate system (or WCS) metadata of Gnuastro's output FITS files (in particular the @code{CDELT} keywords) gives us this information.
@example
$ pixscale=$(astfits r_detected.fits -h1 \
@@ -4814,9 +3677,8 @@ $ echo $pixscale
@end example
@noindent
-Note that we multiplied the value by 3600 so we work in units of
-arc-seconds not degrees. Now, let's calculate the average sky-subtracted
-flux in the border region per pixel.
+Note that we multiplied the value by 3600 so we work in units of arc-seconds
not degrees.
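As a rough sketch of the arithmetic behind this (assuming a @code{CDELT} value in degrees per pixel; the 0.00011 below is a hypothetical stand-in, roughly corresponding to SDSS's 0.396 arcsec/pixel scale), the number of pixels covering one arcsecond-squared is @code{1/(CDELT*3600)^2}:

```shell
# Hypothetical CDELT (degrees/pixel); read the real value from your
# FITS header as shown above.  Convert the pixel side to arcseconds,
# then take one over its square to get pixels per arcsec^2.
echo 0.00011 | awk '{s=$1*3600; print 1/(s*s)}'   # prints roughly 6.38
```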
+Now, let's calculate the average sky-subtracted flux in the border region per
pixel.
@example
$ f=$(astarithmetic r_detected.fits edge.fits not nan where set-i \
@@ -4825,15 +3687,11 @@ $ echo $f
@end example
@noindent
-We can just multiply the two to get the average flux on this border in one
-arcsecond squared. We also have the r-band SDSS zeropoint
-magnitude@footnote{From
-@url{http://classic.sdss.org/dr7/algorithms/fluxcal.html}} to be
-24.80. Therefore we can get the surface brightness of the outer edge (in
-magnitudes per arcsecond squared) using the following command. Just note
-that @code{log} in AWK is in base-2 (not 10), and that AWK doesn't have a
-@code{log10} operator. So we'll do an extra division by @code{log(10)} to
-correct for this.
+We can just multiply the two to get the average flux on this border in one
arcsecond squared.
+We also know the r-band SDSS zeropoint magnitude@footnote{From @url{http://classic.sdss.org/dr7/algorithms/fluxcal.html}} to be 24.80.
+Therefore we can get the surface brightness of the outer edge (in magnitudes
per arcsecond squared) using the following command.
+Just note that @code{log} in AWK is the natural logarithm (base-e, not base-10), and that AWK doesn't have a @code{log10} operator.
+So we'll do an extra division by @code{log(10)} to convert the result to base-10.
@example
$ z=24.80
@@ -4841,26 +3699,16 @@ $ echo "$pixscale $f $z" | awk '@{print
-2.5*log($1*$2)/log(10)+$3@}'
--> 28.2989
@end example
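To convince yourself of the base conversion (a quick sanity check, not part of the tutorial's measurements), AWK's @code{log} divided by @code{log(10)} should reproduce base-10 logarithms on a power of ten:

```shell
# log() is the natural logarithm; dividing by log(10) converts it to
# base-10, so log10(100) should come out as 2.
echo 100 | awk '{print log($1)/log(10)}'   # → 2
```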
-On a single-exposure SDSS image, we have reached a surface brightness limit
-fainter than 28 magnitudes per arcseconds squared!
+On a single-exposure SDSS image, we have reached a surface brightness limit
fainter than 28 magnitudes per arcseconds squared!
-In interpreting this value, you should just have in mind that NoiseChisel
-works based on the contiguity of signal in the pixels. Therefore the larger
-the object, the deeper NoiseChisel can carve it out of the noise. In other
-words, this reported depth, is only for this particular object and dataset,
-processed with this particular NoiseChisel configuration: if the M51 group
-in this image was larger/smaller than this, or if the image was
-larger/smaller, or if we had used a different configuration, we would go
-deeper/shallower.
+In interpreting this value, you should just have in mind that NoiseChisel
works based on the contiguity of signal in the pixels.
+Therefore the larger the object, the deeper NoiseChisel can carve it out of
the noise.
+In other words, this reported depth is only for this particular object and dataset, processed with this particular NoiseChisel configuration: if the M51 group in this image was larger/smaller than this, or if the image was larger/smaller, or if we had used a different configuration, we would go deeper/shallower.
-To avoid typing all these options every time you run NoiseChisel on this
-image, you can use Gnuastro's configuration files, see @ref{Configuration
-files}. For an applied example of setting/using them, see @ref{Option
-management and configuration files}.
+To avoid typing all these options every time you run NoiseChisel on this
image, you can use Gnuastro's configuration files, see @ref{Configuration
files}.
+For an applied example of setting/using them, see @ref{Option management and
configuration files}.
-To continue your analysis of such datasets with extended emission, you can
-use @ref{Segment} to identify all the ``clumps'' over the diffuse regions:
-background galaxies and foreground stars.
+To continue your analysis of such datasets with extended emission, you can use
@ref{Segment} to identify all the ``clumps'' over the diffuse regions:
background galaxies and foreground stars.
@example
$ astsegment r_detected.fits
@@ -4868,25 +3716,17 @@ $ astsegment r_detected.fits
@cindex DS9
@cindex SAO DS9
-Open the output @file{r_detected_segmented.fits} as a multi-extension data
-cube like before and flip through the first and second extensions to see
-the detected clumps (all pixels with a value larger than 1). To optimize
-the parameters and make sure you have detected what you wanted, its highly
-recommended to visually inspect the detected clumps on the input image.
-
-For visual inspection, you can make a simple shell script like below. It
-will first call MakeCatalog to estimate the positions of the clumps, then
-make an SAO ds9 region file and open ds9 with the image and region
-file. Recall that in a shell script, the numeric variables (like @code{$1},
-@code{$2}, and @code{$3} in the example below) represent the arguments
-given to the script. But when used in the AWK arguments, they refer to
-column numbers.
-
-To create the shell script, using your favorite text editor, put the
-contents below into a file called @file{check-clumps.sh}. Recall that
-everything after a @code{#} is just comments to help you understand the
-command (so read them!). Also note that if you are copying from the PDF
-version of this book, fix the single quotes in the AWK command.
+Open the output @file{r_detected_segmented.fits} as a multi-extension data
cube like before and flip through the first and second extensions to see the
detected clumps (all pixels with a value larger than 1).
+To optimize the parameters and make sure you have detected what you wanted, it's highly recommended to visually inspect the detected clumps on the input image.
+
+For visual inspection, you can make a simple shell script like below.
+It will first call MakeCatalog to estimate the positions of the clumps, then
make an SAO ds9 region file and open ds9 with the image and region file.
+Recall that in a shell script, the numeric variables (like @code{$1},
@code{$2}, and @code{$3} in the example below) represent the arguments given to
the script.
+But when used in the AWK arguments, they refer to column numbers.
+
+To create the shell script, using your favorite text editor, put the contents
below into a file called @file{check-clumps.sh}.
+Recall that everything after a @code{#} is just comments to help you
understand the command (so read them!).
+Also note that if you are copying from the PDF version of this book, fix the
single quotes in the AWK command.
@example
#! /bin/bash
@@ -4915,9 +3755,8 @@ rm $1"_cat.fits" $1.reg
@end example
@noindent
-Finally, you just have to activate the script's executable flag with the
-command below. This will enable you to directly/easily call the script as a
-command.
+Finally, you just have to activate the script's executable flag with the
command below.
+This will enable you to directly/easily call the script as a command.
@example
$ chmod +x check-clumps.sh
@@ -4925,72 +3764,47 @@ $ chmod +x check-clumps.sh
@cindex AWK
@cindex GNU AWK
-This script doesn't expect the @file{.fits} suffix of the input's filename
-as the first argument. Because the script produces intermediate files (a
-catalog and DS9 region file, which are later deleted). However, we don't
-want multiple instances of the script (on different files in the same
-directory) to collide (read/write to the same intermediate
-files). Therefore, we have used suffixes added to the input's name to
-identify the intermediate files. Note how all the @code{$1} instances in
-the commands (not within the AWK command@footnote{In AWK, @code{$1} refers
-to the first column, while in the shell script, it refers to the first
-argument.}) are followed by a suffix. If you want to keep the intermediate
-files, put a @code{#} at the start of the last line.
-
-The few, but high-valued, bright pixels in the central parts of the
-galaxies can hinder easy visual inspection of the fainter parts of the
-image. With the second and third arguments to this script, you can set the
-numerical values of the color map (first is minimum/black, second is
-maximum/white). You can call this script with any@footnote{Some
-modifications are necessary based on the input dataset: depending on the
-dynamic range, you have to adjust the second and third arguments. But more
-importantly, depending on the dataset's world coordinate system, you have
-to change the region @code{width}, in the AWK command. Otherwise the circle
-regions can be too small/large.} output of Segment (when
-@option{--rawoutput} is @emph{not} used) with a command like this:
+This script doesn't expect the @file{.fits} suffix of the input's filename as the first argument.
+This is because the script produces intermediate files (a catalog and DS9 region file) which are later deleted.
+However, we don't want multiple instances of the script (on different files in
the same directory) to collide (read/write to the same intermediate files).
+Therefore, we have used suffixes added to the input's name to identify the
intermediate files.
+Note how all the @code{$1} instances in the commands (not within the AWK
command@footnote{In AWK, @code{$1} refers to the first column, while in the
shell script, it refers to the first argument.}) are followed by a suffix.
+If you want to keep the intermediate files, put a @code{#} at the start of the
last line.
+
+The few, but high-valued, bright pixels in the central parts of the galaxies
can hinder easy visual inspection of the fainter parts of the image.
+With the second and third arguments to this script, you can set the numerical
values of the color map (first is minimum/black, second is maximum/white).
+You can call this script with any@footnote{Some modifications are necessary
based on the input dataset: depending on the dynamic range, you have to adjust
the second and third arguments.
+But more importantly, depending on the dataset's world coordinate system, you
have to change the region @code{width}, in the AWK command.
+Otherwise the circle regions can be too small/large.} output of Segment (when
@option{--rawoutput} is @emph{not} used) with a command like this:
@example
$ ./check-clumps.sh r_detected_segmented -0.1 2
@end example
-Go ahead and run this command. You will see the intermediate processing
-being done and finally it opens SAO DS9 for you with the regions
-superimposed on all the extensions of Segment's output. The script will
-only finish (and give you control of the command-line) when you close
-DS9. If you need your access to the command-line before closing DS9, add a
-@code{&} after the end of the command above.
+Go ahead and run this command.
+You will see the intermediate processing being done and finally it opens SAO
DS9 for you with the regions superimposed on all the extensions of Segment's
output.
+The script will only finish (and give you control of the command-line) when
you close DS9.
+If you need access to the command-line before closing DS9, add a @code{&} at the end of the command above.
@cindex Purity
@cindex Completeness
-While DS9 is open, slide the dynamic range (values for black and white, or
-minimum/maximum values in different color schemes) and zoom into various
-regions of the M51 group to see if you are satisfied with the detected
-clumps. Don't forget that through the ``Cube'' window that is opened along
-with DS9, you can flip through the extensions and see the actual clumps
-also. The questions you should be asking your self are these: 1) Which real
-clumps (as you visually @emph{feel}) have been missed? In other words, is
-the @emph{completeness} good? 2) Are there any clumps which you @emph{feel}
-are false? In other words, is the @emph{purity} good?
-
-Note that completeness and purity are not independent of each other, they
-are anti-correlated: the higher your purity, the lower your completeness
-and vice-versa. You can see this by playing with the purity level using the
-@option{--snquant} option. Run Segment as shown above again with @code{-P}
-and see its default value. Then increase/decrease it for higher/lower
-purity and check the result as before. You will see that if you want the
-best purity, you have to sacrifice completeness and vice versa.
-
-One interesting region to inspect in this image is the many bright peaks
-around the central parts of M51. Zoom into that region and inspect how many
-of them have actually been detected as true clumps. Do you have a good
-balance between completeness and purity? Also look out far into the wings
-of the group and inspect the completeness and purity there.
-
-An easier way to inspect completeness (and only completeness) is to mask all
-the pixels detected as clumps and visually inspecting the rest of the
-pixels. You can do this using Arithmetic in a command like below. For easy
-reading of the command, we'll define the shell variable @code{i} for the
-image name and save the output in @file{masked.fits}.
+While DS9 is open, slide the dynamic range (values for black and white, or
minimum/maximum values in different color schemes) and zoom into various
regions of the M51 group to see if you are satisfied with the detected clumps.
+Don't forget that through the ``Cube'' window that is opened along with DS9,
you can flip through the extensions and see the actual clumps also.
+The questions you should be asking yourself are these: 1) Which real clumps (as you visually @emph{feel}) have been missed? In other words, is the @emph{completeness} good? 2) Are there any clumps which you @emph{feel} are false? In other words, is the @emph{purity} good?
+
+Note that completeness and purity are not independent of each other; they are anti-correlated: the higher your purity, the lower your completeness and vice-versa.
+You can see this by playing with the purity level using the @option{--snquant}
option.
+Run Segment as shown above again with @code{-P} and see its default value.
+Then increase/decrease it for higher/lower purity and check the result as
before.
+You will see that if you want the best purity, you have to sacrifice
completeness and vice versa.
+
+One interesting region to inspect in this image is the many bright peaks
around the central parts of M51.
+Zoom into that region and inspect how many of them have actually been detected
as true clumps.
+Do you have a good balance between completeness and purity? Also look out far
into the wings of the group and inspect the completeness and purity there.
+
+An easier way to inspect completeness (and only completeness) is to mask all the pixels detected as clumps and visually inspect the rest of the pixels.
+You can do this using Arithmetic in a command like below.
+For easy reading of the command, we'll define the shell variable @code{i} for
the image name and save the output in @file{masked.fits}.
@example
$ in="r_detected_segmented.fits -hINPUT"
@@ -4998,33 +3812,23 @@ $ clumps="r_detected_segmented.fits -hCLUMPS"
$ astarithmetic $in $clumps 0 gt nan where -oclumps-masked.fits
@end example
-Inspecting @file{clumps-masked.fits}, you can see some very diffuse peaks
-that have been missed, especially as you go farther away from the group
-center and into the diffuse wings. This is due to the fact that with this
-configuration, we have focused more on the sharper clumps. To put the focus
-more on diffuse clumps, you can use a wider convolution kernel. Using a
-larger kernel can also help in detecting the existing clumps to fainter
-levels (thus better separating them from the surrounding diffuse signal).
+Inspecting @file{clumps-masked.fits}, you can see some very diffuse peaks that
have been missed, especially as you go farther away from the group center and
into the diffuse wings.
+This is because, with this configuration, we have focused more on the sharper clumps.
+To put the focus more on diffuse clumps, you can use a wider convolution
kernel.
+Using a larger kernel can also help in detecting the existing clumps to
fainter levels (thus better separating them from the surrounding diffuse
signal).
-You can make any kernel easily using the @option{--kernel} option in
-@ref{MakeProfiles}. But note that a larger kernel is also going to wash-out
-many of the sharp/small clumps close to the center of M51 and also some
-smaller peaks on the wings. Please continue playing with Segment's
-configuration to obtain a more complete result (while keeping reasonable
-purity). We'll finish the discussion on finding true clumps at this point.
+You can make any kernel easily using the @option{--kernel} option in
@ref{MakeProfiles}.
+But note that a larger kernel is also going to wash-out many of the
sharp/small clumps close to the center of M51 and also some smaller peaks on
the wings.
+Please continue playing with Segment's configuration to obtain a more complete
result (while keeping reasonable purity).
+We'll finish the discussion on finding true clumps at this point.
-The properties of the clumps within M51, or the background objects can then
-easily be measured using @ref{MakeCatalog}. To measure the properties of
-the background objects (detected as clumps over the diffuse region), you
-shouldn't mask the diffuse region. When measuring clump properties with
-@ref{MakeCatalog} and using the @option{--clumpscat}, the ambient flux
-(from the diffuse region) is calculated and subtracted. If the diffuse
-region is masked, its effect on the clump brightness cannot be calculated
-and subtracted.
+The properties of the clumps within M51, or the background objects can then
easily be measured using @ref{MakeCatalog}.
+To measure the properties of the background objects (detected as clumps over
the diffuse region), you shouldn't mask the diffuse region.
+When measuring clump properties with @ref{MakeCatalog} and using the
@option{--clumpscat}, the ambient flux (from the diffuse region) is calculated
and subtracted.
+If the diffuse region is masked, its effect on the clump brightness cannot be
calculated and subtracted.
-To keep this tutorial short, we'll stop here. See @ref{Segmentation and
-making a catalog} and @ref{Segment} for more on using Segment, producing
-catalogs with MakeCatalog and using those catalogs.
+To keep this tutorial short, we'll stop here.
+See @ref{Segmentation and making a catalog} and @ref{Segment} for more on
using Segment, producing catalogs with MakeCatalog and using those catalogs.
@@ -5042,44 +3846,25 @@ catalogs with MakeCatalog and using those catalogs.
@c were seen to follow this ``Installation'' chapter title in search of the
@c tarball and fast instructions.
@cindex Installation
-The latest released version of Gnuastro source code is always available at
-the following URL:
+The latest released version of Gnuastro source code is always available at the
following URL:
@url{http://ftpmirror.gnu.org/gnuastro/gnuastro-latest.tar.gz}
@noindent
-@ref{Quick start} describes the commands necessary to configure, build, and
-install Gnuastro on your system. This chapter will be useful in cases where
-the simple procedure above is not sufficient, for example your system lacks
-a mandatory/optional dependency (in other words, you can't pass the
-@command{$ ./configure} step), or you want greater customization, or you
-want to build and install Gnuastro from other random points in its history,
-or you want a higher level of control on the installation. Thus if you were
-happy with downloading the tarball and following @ref{Quick start}, then
-you can safely ignore this chapter and come back to it in the future if you
-need more customization.
-
-@ref{Dependencies} describes the mandatory, optional and bootstrapping
-dependencies of Gnuastro. Only the first group are required/mandatory when
-you are building Gnuastro using a tarball (see @ref{Release tarball}), they
-are very basic and low-level tools used in most astronomical software, so
-you might already have them installed, if not they are very easy to install
-as described for each. @ref{Downloading the source} discusses the two
-methods you can obtain the source code: as a tarball (a significant
-snapshot in Gnuastro's history), or the full
-history@footnote{@ref{Bootstrapping dependencies} are required if you clone
-the full history.}. The latter allows you to build Gnuastro at any random
-point in its history (for example to get bug fixes or new features that are
-not released as a tarball yet).
-
-The building and installation of Gnuastro is heavily customizable, to learn
-more about them, see @ref{Build and install}. This section is essentially a
-thorough explanation of the steps in @ref{Quick start}. It discusses ways
-you can influence the building and installation. If you encounter any
-problems in the installation process, it is probably already explained in
-@ref{Known issues}. In @ref{Other useful software} the installation and
-usage of some other free software that are not directly required by
-Gnuastro but might be useful in conjunction with it is discussed.
+@ref{Quick start} describes the commands necessary to configure, build, and
install Gnuastro on your system.
+This chapter will be useful in cases where the simple procedure above is not
sufficient, for example your system lacks a mandatory/optional dependency (in
other words, you can't pass the @command{$ ./configure} step), or you want
greater customization, or you want to build and install Gnuastro from other
random points in its history, or you want a higher level of control on the
installation.
+Thus if you were happy with downloading the tarball and following @ref{Quick
start}, then you can safely ignore this chapter and come back to it in the
future if you need more customization.
+
+@ref{Dependencies} describes the mandatory, optional and bootstrapping
dependencies of Gnuastro.
+Only the first group are required/mandatory when you are building Gnuastro
using a tarball (see @ref{Release tarball}).
+They are very basic and low-level tools used in most astronomical software, so
you might already have them installed; if not, they are very easy to install as
described for each.
+@ref{Downloading the source} discusses the two methods to obtain the source
code: as a tarball (a significant snapshot in Gnuastro's history), or the full
history@footnote{@ref{Bootstrapping dependencies} are required if you
clone the full history.}.
+The latter allows you to build Gnuastro at any random point in its history
(for example to get bug fixes or new features that are not released as a
tarball yet).
+
+The building and installation of Gnuastro are heavily customizable; to learn
more, see @ref{Build and install}.
+This section is essentially a thorough explanation of the steps in @ref{Quick
start}.
+It discusses ways you can influence the building and installation.
+If you encounter any problems in the installation process, it is probably
already explained in @ref{Known issues}.
+@ref{Other useful software} discusses the installation and usage of some other
free software that is not directly required by Gnuastro, but might be useful in
conjunction with it.
@menu
@@ -5094,25 +3879,16 @@ Gnuastro but might be useful in conjunction with it is
discussed.
@node Dependencies, Downloading the source, Installation, Installation
@section Dependencies
-A minimal set of dependencies are mandatory for building Gnuastro from the
-standard tarball release. If they are not present you cannot pass
-Gnuastro's configuration step. The mandatory dependencies are therefore
-very basic (low-level) tools which are easy to obtain, build and install,
-see @ref{Mandatory dependencies} for a full discussion.
-
-If you have the packages of @ref{Optional dependencies}, Gnuastro will have
-additional functionality (for example converting FITS images to JPEG or
-PDF). If you are installing from a tarball as explained in @ref{Quick
-start}, you can stop reading after this section. If you are cloning the
-version controlled source (see @ref{Version controlled source}), an
-additional bootstrapping step is required before configuration and its
-dependencies are explained in @ref{Bootstrapping dependencies}.
-
-Your operating system's package manager is an easy and convenient way to
-download and install the dependencies that are already pre-built for your
-operating system. In @ref{Dependencies from package managers}, we'll list
-some common operating system package manager commands to install the
-optional and mandatory dependencies.
+A minimal set of dependencies are mandatory for building Gnuastro from the
standard tarball release.
+If they are not present, you cannot pass Gnuastro's configuration step.
+The mandatory dependencies are therefore very basic (low-level) tools which
are easy to obtain, build and install, see @ref{Mandatory dependencies} for a
full discussion.
+
+If you have the packages of @ref{Optional dependencies}, Gnuastro will have
additional functionality (for example converting FITS images to JPEG or PDF).
+If you are installing from a tarball as explained in @ref{Quick start}, you
can stop reading after this section.
+If you are cloning the version controlled source (see @ref{Version controlled
source}), an additional bootstrapping step is required before configuration;
its dependencies are explained in @ref{Bootstrapping dependencies}.
+
+Your operating system's package manager is an easy and convenient way to
download and install the dependencies that are already pre-built for your
operating system.
+In @ref{Dependencies from package managers}, we'll list some common operating
system package manager commands to install the optional and mandatory
dependencies.
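Before turning to the package manager, it can help to see which dependencies are already present. The sketch below is only an illustration (it assumes the conventional pkg-config module names @command{gsl}, @command{cfitsio} and @command{wcslib}; your distribution may use others):

```shell
# Hedged sketch: probe the usual pkg-config names of the mandatory
# dependencies; anything "not found" is a candidate for the package
# manager (or an installation from source, as described below).
checked=0
for lib in gsl cfitsio wcslib; do
  if pkg-config --exists "$lib" 2>/dev/null; then
    echo "$lib: found"
  else
    echo "$lib: not found"
  fi
  checked=$((checked+1))
done
```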
@menu
* Mandatory dependencies:: Gnuastro will not install without these.
@@ -5126,11 +3902,9 @@ optional and mandatory dependencies.
@cindex Dependencies, Gnuastro
@cindex GNU build system
-The mandatory Gnuastro dependencies are very basic and low-level
-tools. They all follow the same basic GNU based build system (like that
-shown in @ref{Quick start}), so even if you don't have them, installing
-them should be pretty straightforward. In this section we explain each
-program and any specific note that might be necessary in the installation.
+The mandatory Gnuastro dependencies are very basic and low-level tools.
+They all follow the same basic GNU based build system (like that shown in
@ref{Quick start}), so even if you don't have them, installing them should be
pretty straightforward.
+In this section we explain each program and any specific note that might be
necessary in the installation.
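The shared GNU build-system recipe mentioned above can be condensed into a small shell function. This is only a generic sketch (the tarball name passed as @code{$1} is a placeholder, and individual packages may need extra configure options as noted in their own subsections):

```shell
# Minimal sketch of the generic GNU build-system recipe that the
# mandatory dependencies below all follow ("$1" is a placeholder
# for the downloaded release tarball's file name).
gnu_build() {
  tar xf "$1"                    # unpack the release tarball
  cd "${1%.tar.gz}" || return 1  # enter the unpacked directory
  ./configure                    # probe the system, set up the build
  make                           # compile
  sudo make install              # install (root needed for system prefixes)
}
```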
@menu
@@ -5143,13 +3917,8 @@ program and any specific note that might be necessary in
the installation.
@subsubsection GNU Scientific library
@cindex GNU Scientific Library
-The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL,
-is a large collection of functions that are very useful in scientific
-applications, for example integration, random number generation, and Fast
-Fourier Transform among many others. To install GSL from source, you can
-run the following commands after you have downloaded
-@url{http://ftpmirror.gnu.org/gsl/gsl-latest.tar.gz,
-@file{gsl-latest.tar.gz}}:
+The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL, is
a large collection of functions that are very useful in scientific
applications, for example integration, random number generation, and Fast
Fourier Transform among many others.
+To install GSL from source, you can run the following commands after you have
downloaded @url{http://ftpmirror.gnu.org/gsl/gsl-latest.tar.gz,
@file{gsl-latest.tar.gz}}:
@example
$ tar xf gsl-latest.tar.gz
@@ -5165,55 +3934,33 @@ $ sudo make install
@cindex CFITSIO
@cindex FITS standard
-@url{http://heasarc.gsfc.nasa.gov/fitsio/, CFITSIO} is the closest you can
-get to the pixels in a FITS image while remaining faithful to the
-@url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}. It is
-written by William Pence, the principal author of the FITS
-standard@footnote{Pence, W.D. et al. Definition of the Flexible Image
-Transport System (FITS), version 3.0. (2010) Astronomy and Astrophysics,
-Volume 524, id.A42, 40 pp.}, and is regularly updated. Setting the
-definitions for all other software packages using FITS images.
+@url{http://heasarc.gsfc.nasa.gov/fitsio/, CFITSIO} is the closest you can get
to the pixels in a FITS image while remaining faithful to the
@url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}.
+It is written by William Pence, the principal author of the FITS
standard@footnote{Pence, W.D. et al. Definition of the Flexible Image Transport
System (FITS), version 3.0. (2010) Astronomy and Astrophysics, Volume 524,
id.A42, 40 pp.}, and is regularly updated.
+It thus sets the definitions for all other software packages using FITS
images.
@vindex --enable-reentrant
@cindex Reentrancy, multiple file opening
@cindex Multiple file opening, reentrancy
-Some GNU/Linux distributions have CFITSIO in their package managers, if it
-is available and updated, you can use it. One problem that might occur is
-that CFITSIO might not be configured with the @option{--enable-reentrant}
-option by the distribution. This option allows CFITSIO to open a file in
-multiple threads, it can thus provide great speed improvements. If CFITSIO
-was not configured with this option, any program which needs this
-capability will warn you and abort when you ask for multiple threads (see
-@ref{Multi-threaded operations}).
-
-To install CFITSIO from source, we strongly recommend that you have a look
-through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and
-understand the options you can pass to @command{$ ./configure} (they aren't
-too much). This is a very basic package for most astronomical software and
-it is best that you configure it nicely with your system. Once you download
-the source and unpack it, the following configure script should be enough
-for most purposes. Don't forget to read chapter two of the manual though,
-for example the second option is only for 64bit systems. The manual also
-explains how to check if it has been installed correctly.
-
-CFITSIO comes with two executable files called @command{fpack} and
-@command{funpack}. From their manual: they ``are standalone programs for
-compressing and uncompressing images and tables that are stored in the FITS
-(Flexible Image Transport System) data format. They are analogous to the
-gzip and gunzip compression programs except that they are optimized for the
-types of astronomical images that are often stored in FITS format''. The
-commands below will compile and install them on your system along with
-CFITSIO. They are not essential for Gnuastro, since they are just wrappers
-for functions within CFITSIO, but they can come in handy. The @command{make
-utils} command is only available for versions above 3.39, it will build
-these executable files along with several other executable test files which
-are deleted in the following commands before the installation (otherwise
-the test files will also be installed).
-
-The commands necessary to decompress, build and install CFITSIO from source
-are described below. Let's assume you have downloaded
-@url{http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c/cfitsio_latest.tar.gz,
-@file{cfitsio_latest.tar.gz}} and are in the same directory:
+Some GNU/Linux distributions have CFITSIO in their package managers; if it is
available and updated, you can use it.
+One problem that might occur is that CFITSIO might not be configured with the
@option{--enable-reentrant} option by the distribution.
+This option allows CFITSIO to open a file in multiple threads; it can thus
provide great speed improvements.
+If CFITSIO was not configured with this option, any program which needs this
capability will warn you and abort when you ask for multiple threads (see
@ref{Multi-threaded operations}).
+
+To install CFITSIO from source, we strongly recommend that you have a look
through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and
understand the options you can pass to @command{$ ./configure} (they aren't too
many).
+This is a very basic package for most astronomical software and it is best
that you configure it nicely with your system.
+Once you download the source and unpack it, the following configure script
should be enough for most purposes.
+Don't forget to read chapter two of the manual though, for example the second
option is only for 64bit systems.
+The manual also explains how to check if it has been installed correctly.
+
+CFITSIO comes with two executable files called @command{fpack} and
@command{funpack}.
+From their manual: they ``are standalone programs for compressing and
uncompressing images and tables that are stored in the FITS (Flexible Image
Transport System) data format.
+They are analogous to the gzip and gunzip compression programs except that
they are optimized for the types of astronomical images that are often stored
in FITS format''.
+The commands below will compile and install them on your system along with
CFITSIO.
+They are not essential for Gnuastro, since they are just wrappers for
functions within CFITSIO, but they can come in handy.
+The @command{make utils} command is only available for versions above 3.39; it
will build these executable files along with several other executable test
files, which are deleted in the following commands before the installation
(otherwise the test files would also be installed).
+
+The commands necessary to decompress, build and install CFITSIO from source
are described below.
+Let's assume you have downloaded
@url{http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c/cfitsio_latest.tar.gz,
@file{cfitsio_latest.tar.gz}} and are in the same directory:
@example
$ tar xf cfitsio_latest.tar.gz
@@ -5238,40 +3985,23 @@ $ sudo make install
@cindex WCS
@cindex WCSLIB
@cindex World Coordinate System
-@url{http://www.atnf.csiro.au/people/mcalabre/WCS/, WCSLIB} is written and
-maintained by one of the authors of the World Coordinate System (WCS)
-definition in the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS
-standard}@footnote{Greisen E.W., Calabretta M.R. (2002) Representation of
-world coordinates in FITS. Astronomy and Astrophysics, 395, 1061-1075.},
-Mark Calabretta. It might be already built and ready in your distribution's
-package management system. However, here the installation from source is
-explained, for the advantages of installation from source please see
-@ref{Mandatory dependencies}. To install WCSLIB you will need to have
-CFITSIO already installed, see @ref{CFITSIO}.
+@url{http://www.atnf.csiro.au/people/mcalabre/WCS/, WCSLIB} is written and
maintained by one of the authors of the World Coordinate System (WCS)
definition in the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS
standard}@footnote{Greisen E.W., Calabretta M.R. (2002) Representation of world
coordinates in FITS.
+Astronomy and Astrophysics, 395, 1061-1075.}, Mark Calabretta.
+It might already be built and ready in your distribution's package management
system.
+However, here the installation from source is explained; for the advantages of
installation from source, please see @ref{Mandatory dependencies}.
+To install WCSLIB you will need to have CFITSIO already installed (see
@ref{CFITSIO}).
@vindex --without-pgplot
-WCSLIB also has plotting capabilities which use PGPLOT (a plotting library
-for C). If you wan to use those capabilities in WCSLIB, @ref{PGPLOT}
-provides the PGPLOT installation instructions. However PGPLOT is
-old@footnote{As of early June 2016, its most recent version was uploaded in
-February 2001.}, so its installation is not easy, there are also many great
-modern WCS plotting tools (mostly in written in Python). Hence, if you will
-not be using those plotting functions in WCSLIB, you can configure it with
-the @option{--without-pgplot} option as shown below.
-
-If you have the cURL library @footnote{@url{https://curl.haxx.se}} on your
-system and you installed CFITSIO version 3.42 or later, you will need to
-also link with the cURL library at configure time (through the
-@code{-lcurl} option as shown below). CFITSIO uses the cURL library for its
-HTTPS (or HTTP Secure@footnote{@url{https://en.wikipedia.org/wiki/HTTPS}})
-support and if it is present on your system, CFITSIO will depend on
-it. Therefore, if @command{./configure} command below fails (you don't have
-the cURL library), then remove this option and rerun it.
-
-Let's assume you have downloaded
-@url{ftp://ftp.atnf.csiro.au/pub/software/wcslib/wcslib.tar.bz2,
-@file{wcslib.tar.bz2}} and are in the same directory, to configure, build,
-check and install WCSLIB follow the steps below.
+WCSLIB also has plotting capabilities which use PGPLOT (a plotting library for
C).
+If you want to use those capabilities in WCSLIB, @ref{PGPLOT} provides the
PGPLOT installation instructions.
+However, PGPLOT is old@footnote{As of early June 2016, its most recent version
was uploaded in February 2001.}, so its installation is not easy; there are
also many great modern WCS plotting tools (mostly written in Python).
+Hence, if you will not be using those plotting functions in WCSLIB, you can
configure it with the @option{--without-pgplot} option as shown below.
+
+If you have the cURL library @footnote{@url{https://curl.haxx.se}} on your
system and you installed CFITSIO version 3.42 or later, you will need to also
link with the cURL library at configure time (through the @code{-lcurl} option
as shown below).
+CFITSIO uses the cURL library for its HTTPS (or HTTP
Secure@footnote{@url{https://en.wikipedia.org/wiki/HTTPS}}) support and if it
is present on your system, CFITSIO will depend on it.
+Therefore, if the @command{./configure} command below fails (you don't have
the cURL library), then remove this option and rerun it.
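The remove-and-rerun advice above amounts to a simple fallback, sketched here as a shell function. This is only an illustration under the assumption that WCSLIB's standard configure script is being run from its unpacked source directory (the exact @code{LIBS} value is taken from the example below, minus @code{-lcurl} on the retry):

```shell
# Hedged sketch of the fallback described above: first configure WCSLIB
# with cURL linked in (needed when CFITSIO was built with HTTPS support);
# if that fails because the cURL library is absent, rerun without it.
configure_wcslib() {
  ./configure LIBS="-pthread -lcurl -lm" --without-pgplot \
    || ./configure LIBS="-pthread -lm" --without-pgplot
}
```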
+
+Let's assume you have downloaded
@url{ftp://ftp.atnf.csiro.au/pub/software/wcslib/wcslib.tar.bz2,
@file{wcslib.tar.bz2}} and are in the same directory; to configure, build,
check and install WCSLIB, follow the steps below.
@example
$ tar xf wcslib.tar.bz2
@@ -5291,102 +4021,68 @@ $ sudo make install
@node Optional dependencies, Bootstrapping dependencies, Mandatory
dependencies, Dependencies
@subsection Optional dependencies
-The libraries listed here are only used for very specific applications,
-therefore if you don't want these operations, Gnuastro will be built and
-installed without them and you don't have to have the dependencies.
+The libraries listed here are only used for very specific applications,
therefore if you don't want these operations, Gnuastro will be built and
installed without them and you don't have to have the dependencies.
@cindex GPL Ghostscript
-If the @command{./configure} script can't find these requirements, it will
-warn you in the end that they are not present and notify you of the
-operation(s) you can't do due to not having them. If the output you request
-from a program requires a missing library, that program is going to warn
-you and abort. In the case of program dependencies (like GPL GhostScript),
-if you install them at a later time, the program will run. This is because
-if required libraries are not present at build time, the executables cannot
-be built, but an executable is called by the built program at run time so
-if it becomes available, it will be used. If you do install an optional
-library later, you will have to rebuild Gnuastro and reinstall it for it to
-take effect.
+If the @command{./configure} script can't find these requirements, it will
warn you at the end that they are not present, and notify you of the
operation(s) you will not be able to do without them.
+If the output you request from a program requires a missing library, that
program is going to warn you and abort.
+In the case of program dependencies (like GPL Ghostscript), if you install
them at a later time, the program will run.
+This is because required libraries must be present at build time for the
executables to be built, but an external executable is only called by the built
program at run time; if it becomes available later, it will be used.
+If you do install an optional library later, you will have to rebuild Gnuastro
and reinstall it for it to take effect.
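The run-time lookup of a program dependency described above can be sketched in a few lines of shell. This is only an illustration of the mechanism (searching @code{PATH} on demand), not Gnuastro's actual implementation:

```shell
# Hedged sketch of the run-time lookup described above: the Ghostscript
# executable (gs) is only searched for on PATH when actually needed, so
# installing it after Gnuastro requires no rebuild.
if command -v gs >/dev/null 2>&1; then
  gs_status="present"   # PDF conversion will work
else
  gs_status="missing"   # the program would warn and abort on PDF output
fi
echo "Ghostscript: $gs_status"
```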
@table @asis
@item GNU Libtool
@cindex GNU Libtool
-Libtool is a program to simplify managing of the libraries to build an
-executable (a program). GNU Libtool has some added functionality compared
-to other implementations. If GNU Libtool isn't present on your system at
-configuration time, a warning will be printed and @ref{BuildProgram} won't
-be built or installed. The configure script will look into your search path
-(@code{PATH}) for GNU Libtool through the following executable names:
-@command{libtool} (acceptable only if it is the GNU implementation) or
-@command{glibtool}. See @ref{Installation directory} for more on
-@code{PATH}.
-
-GNU Libtool (the binary/executable file) is a low-level program that is
-probably already present on your system, and if not, is available in your
-operating system package manager@footnote{Note that we want the
-binary/executable Libtool program which can be run on the command-line. In
-Debian-based operating systems which separate various parts of a package,
-you want want @code{libtool-bin}, the @code{libtool} package won't contain
-the executable program.}. If you want to install GNU Libtool's latest
-version from source, please visit its
-@url{https://www.gnu.org/software/libtool/, webpage}.
-
-Gnuastro's tarball is shipped with an internal implementation of GNU
-Libtool. Even if you have GNU Libtool, Gnuastro's internal implementation
-is used for the building and installation of Gnuastro. As a result, you can
-still build, install and use Gnuastro even if you don't have GNU Libtool
-installed on your system. However, this internal Libtool does not get
-installed. Therefore, after Gnuastro's installation, if you want to use
-@ref{BuildProgram} to compile and link your own C source code which uses
-the @ref{Gnuastro library}, you need to have GNU Libtool available on your
-system (independent of Gnuastro). See @ref{Review of library fundamentals}
-to learn more about libraries.
+Libtool is a program to simplify managing of the libraries to build an
executable (a program).
+GNU Libtool has some added functionality compared to other implementations.
+If GNU Libtool isn't present on your system at configuration time, a warning
will be printed and @ref{BuildProgram} won't be built or installed.
+The configure script will look into your search path (@code{PATH}) for GNU
Libtool through the following executable names: @command{libtool} (acceptable
only if it is the GNU implementation) or @command{glibtool}.
+See @ref{Installation directory} for more on @code{PATH}.
+
+GNU Libtool (the binary/executable file) is a low-level program that is
probably already present on your system, and if not, is available in your
operating system package manager@footnote{Note that we want the
binary/executable Libtool program which can be run on the command-line.
+In Debian-based operating systems which separate various parts of a package,
you will want @code{libtool-bin}; the @code{libtool} package won't contain the
executable program.}.
+If you want to install GNU Libtool's latest version from source, please visit
its @url{https://www.gnu.org/software/libtool/, webpage}.
+
+Gnuastro's tarball is shipped with an internal implementation of GNU Libtool.
+Even if you have GNU Libtool, Gnuastro's internal implementation is used for
the building and installation of Gnuastro.
+As a result, you can still build, install and use Gnuastro even if you don't
have GNU Libtool installed on your system.
+However, this internal Libtool does not get installed.
+Therefore, after Gnuastro's installation, if you want to use
@ref{BuildProgram} to compile and link your own C source code which uses the
@ref{Gnuastro library}, you need to have GNU Libtool available on your system
(independent of Gnuastro).
+See @ref{Review of library fundamentals} to learn more about libraries.
@item libgit2
@cindex Git
-@cindex libgit2
+@pindex libgit2
@cindex Version control systems
-Git is one of the most common version control systems (see @ref{Version
-controlled source}). When @file{libgit2} is present, and Gnuastro's
-programs are run within a version controlled directory, outputs will
-contain the version number of the working directory's repository for future
-reproducibility. See the @command{COMMIT} keyword header in @ref{Output
-FITS files} for a discussion.
+Git is one of the most common version control systems (see @ref{Version
controlled source}).
+When @file{libgit2} is present, and Gnuastro's programs are run within a
version controlled directory, outputs will contain the version number of the
working directory's repository for future reproducibility.
+See the @command{COMMIT} keyword header in @ref{Output FITS files} for a
discussion.
@item libjpeg
@pindex libjpeg
@cindex JPEG format
-libjpeg is only used by ConvertType to read from and write to JPEG images,
-see @ref{Recognized file formats}. @url{http://www.ijg.org/, libjpeg} is a
-very basic library that provides tools to read and write JPEG images, most
-Unix-like graphic programs and libraries use it. Therefore you most
-probably already have it installed.
-@url{http://libjpeg-turbo.virtualgl.org/, libjpeg-turbo} is an alternative
-to libjpeg. It uses Single instruction, multiple data (SIMD) instructions
-for ARM based systems that significantly decreases the processing time of
-JPEG compression and decompression algorithms.
+libjpeg is only used by ConvertType to read from and write to JPEG images, see
@ref{Recognized file formats}.
+@url{http://www.ijg.org/, libjpeg} is a very basic library that provides tools
to read and write JPEG images; most Unix-like graphic programs and libraries
use it.
+Therefore, you most probably already have it installed.
+@url{http://libjpeg-turbo.virtualgl.org/, libjpeg-turbo} is an alternative to
libjpeg.
+It uses Single Instruction, Multiple Data (SIMD) instructions on ARM-based
systems that significantly decrease the processing time of JPEG compression
and decompression algorithms.
@item libtiff
@pindex libtiff
@cindex TIFF format
-libtiff is used by ConvertType and the libraries to read TIFF images, see
-@ref{Recognized file formats}. @url{http://www.simplesystems.org/libtiff/,
-libtiff} is a very basic library that provides tools to read and write TIFF
-images, most Unix-like operating system graphic programs and libraries use
-it. Therefore even if you don't have it installed, it must be easily
-available in your package manager.
+libtiff is used by ConvertType and the libraries to read TIFF images, see
@ref{Recognized file formats}.
+@url{http://www.simplesystems.org/libtiff/, libtiff} is a very basic library
that provides tools to read and write TIFF images; most Unix-like operating
system graphic programs and libraries use it.
+Therefore, even if you don't have it installed, it should be easily available
in your package manager.
@item GPL Ghostscript
@cindex GPL Ghostscript
-GPL Ghostscript's executable (@command{gs}) is called by ConvertType to
-compile a PDF file from a source PostScript file, see
-@ref{ConvertType}. Therefore its headers (and libraries) are not
-needed. With a very high probability you already have it in your GNU/Linux
-distribution. Unfortunately it does not follow the standard GNU build style
-so installing it is very hard. It is best to rely on your distribution's
-package managers for this.
+GPL Ghostscript's executable (@command{gs}) is called by ConvertType to
compile a PDF file from a source PostScript file, see @ref{ConvertType}.
+Therefore its headers (and libraries) are not needed.
+With a very high probability you already have it in your GNU/Linux
distribution.
+Unfortunately, it does not follow the standard GNU build style, so installing
it is very hard.
+It is best to rely on your distribution's package managers for this.
@end table
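The version recording that libgit2 enables (the @command{COMMIT} keyword mentioned in the table above) can be illustrated from the command line. This is only a rough analogy, not how Gnuastro queries the repository internally:

```shell
# Hedged illustration of the libgit2 feature above: what gets recorded is
# essentially the working directory's current commit, comparable to what
# this prints inside a version-controlled directory ("n/a" otherwise).
commit=$(git describe --dirty --always 2>/dev/null || echo "n/a")
echo "repository state: $commit"
```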
@@ -5396,26 +4092,15 @@ package managers for this.
@node Bootstrapping dependencies, Dependencies from package managers, Optional
dependencies, Dependencies
@subsection Bootstrapping dependencies
-Bootstrapping is only necessary if you have decided to obtain the full
-version controlled history of Gnuastro, see @ref{Version controlled source}
-and @ref{Bootstrapping}. Using the version controlled source enables you to
-always be up to date with the most recent development work of Gnuastro (bug
-fixes, new functionalities, improved algorithms and etc). If you have
-downloaded a tarball (see @ref{Downloading the source}), then you can
-ignore this subsection.
-
-To successfully run the bootstrapping process, there are some additional
-dependencies to those discussed in the previous subsections. These are low
-level tools that are used by a large collection of Unix-like operating
-systems programs, therefore they are most probably already available in
-your system. If they are not already installed, you should be able to
-easily find them in any GNU/Linux distribution package management system
-(@command{apt-get}, @command{yum}, @command{pacman} and etc). The short
-names in parenthesis in @command{typewriter} font after the package name
-can be used to search for them in your package manager. For the GNU
-Portability Library, GNU Autoconf Archive and @TeX{} Live, it is
-recommended to use the instructions here, not your operating system's
-package manager.
+Bootstrapping is only necessary if you have decided to obtain the full version
controlled history of Gnuastro, see @ref{Version controlled source} and
@ref{Bootstrapping}.
+Using the version controlled source enables you to always be up to date with
the most recent development work of Gnuastro (bug fixes, new functionalities,
improved algorithms, etc).
+If you have downloaded a tarball (see @ref{Downloading the source}), then you
can ignore this subsection.
+
+To successfully run the bootstrapping process, there are some additional
dependencies beyond those discussed in the previous subsections.
+These are low-level tools that are used by a large collection of Unix-like
operating system programs, so they are most probably already available on your
system.
+If they are not already installed, you should be able to easily find them in
any GNU/Linux distribution package management system (@command{apt-get},
@command{yum}, @command{pacman}, etc).
+The short names in parentheses in @command{typewriter} font after the package
name can be used to search for them in your package manager.
+For the GNU Portability Library, GNU Autoconf Archive and @TeX{} Live, it is
recommended to use the instructions here, not your operating system's package
manager.
@table @asis
@@ -5423,34 +4108,17 @@ package manager.
@cindex GNU C library
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
-To ensure portability for a wider range of operating systems (those that
-don't include GNU C library, namely glibc), Gnuastro depends on the GNU
-portability library, or Gnulib. Gnulib keeps a copy of all the functions in
-glibc, implemented (as much as possible) to be portable to other operating
-systems. The @file{bootstrap} script can automatically clone Gnulib (as a
-@file{gnulib/} directory inside Gnuastro), however, as described in
-@ref{Bootstrapping} this is not recommended.
-
-The recommended way to bootstrap Gnuastro is to first clone Gnulib and the
-Autoconf archives (see below) into a local directory outside of
-Gnuastro. Let's call it @file{DEVDIR}@footnote{If you are not a developer
-in Gnulib or Autoconf archives, @file{DEVDIR} can be a directory that you
-don't backup. In this way the large number of files in these projects won't
-slow down your backup process or take bandwidth (if you backup to a remote
-server).} (which you can set to any directory). Currently in Gnuastro,
-both Gnulib and Autoconf archives have to be cloned in the same top
-directory@footnote{If you already have the Autoconf archives in a separate
-directory, or can't clone it in the same directory as Gnulib, or you have
-it with another directory name (not @file{autoconf-archive/}), you can
-follow this short step. Set @file{AUTOCONFARCHIVES} to your desired
-address. Then define a symbolic link in @file{DEVDIR} with the following
-command so Gnuastro's bootstrap script can find it:@*@command{$ ln -s
-$AUTOCONFARCHIVES $DEVDIR/autoconf-archive}.} like the case
-here@footnote{If your internet connection is active, but Git complains
-about the network, it might be due to your network setup not recognizing
-the git protocol. In that case use the following URL for the HTTP protocol
-instead (for Autoconf archives, replace the name):
-@command{http://git.sv.gnu.org/r/gnulib.git}}:
+To ensure portability for a wider range of operating systems (those that don't
include the GNU C Library, namely glibc), Gnuastro depends on the GNU
portability library, or Gnulib.
+Gnulib keeps a copy of all the functions in glibc, implemented (as much as
possible) to be portable to other operating systems.
+The @file{bootstrap} script can automatically clone Gnulib (as a
@file{gnulib/} directory inside Gnuastro); however, as described in
@ref{Bootstrapping}, this is not recommended.
+
+The recommended way to bootstrap Gnuastro is to first clone Gnulib and the
Autoconf archives (see below) into a local directory outside of Gnuastro.
+Let's call it @file{DEVDIR}@footnote{If you are not a developer in Gnulib or
Autoconf archives, @file{DEVDIR} can be a directory that you don't back up.
+In this way the large number of files in these projects won't slow down your
backup process or take bandwidth (if you back up to a remote server).} (which
you can set to any directory).
+Currently in Gnuastro, both Gnulib and Autoconf archives have to be cloned in
the same top directory@footnote{If you already have the Autoconf archives in a
separate directory, or can't clone it in the same directory as Gnulib, or you
have it with another directory name (not @file{autoconf-archive/}), you can
follow this short step.
+Set @file{AUTOCONFARCHIVES} to your desired address.
+Then define a symbolic link in @file{DEVDIR} with the following command so
Gnuastro's bootstrap script can find it:@*@command{$ ln -s $AUTOCONFARCHIVES
$DEVDIR/autoconf-archive}.} like the case here@footnote{If your internet
connection is active, but Git complains about the network, it might be due to
your network setup not recognizing the git protocol.
+In that case use the following URL for the HTTP protocol instead (for Autoconf
archives, replace the name): @command{http://git.sv.gnu.org/r/gnulib.git}}:
@example
$ DEVDIR=/home/yourname/Development
@@ -5460,35 +4128,28 @@ $ git clone git://git.sv.gnu.org/autoconf-archive.git
@end example
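The footnote above describes linking a separately located Autoconf archives clone under the name that the bootstrap script expects. A minimal sketch of that step (both paths here are hypothetical stand-ins, created as temporary directories only so the sketch runs anywhere; replace them with your real @file{DEVDIR} and clone location):

```shell
# Sketch only: the two paths are hypothetical stand-ins, not real clones.
DEVDIR=$(mktemp -d)            # stands in for e.g. /home/yourname/Development
AUTOCONFARCHIVES=$(mktemp -d)  # stands in for your existing Autoconf archives clone

# Give the clone the name Gnuastro's bootstrap script looks for:
ln -s "$AUTOCONFARCHIVES" "$DEVDIR/autoconf-archive"
ls -ld "$DEVDIR/autoconf-archive"
```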
@noindent
-You now have the full version controlled source of these two repositories
-in separate directories. Both these packages are regularly updated, so
-every once in a while, you can run @command{$ git pull} within them to get
-any possible updates.
+You now have the full version controlled source of these two repositories in
separate directories.
+Both these packages are regularly updated, so every once in a while, you can
run @command{$ git pull} within them to get any possible updates.
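The periodic update described above can be scripted. In the sketch below, a throw-away local repository stands in for the real Gnulib remote so the commands run without network access; only the @command{git pull} invocation itself is taken from the text, the rest is scaffolding:

```shell
# Sketch: a local throw-away repository stands in for the real remote,
# so this demonstration works offline.
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "initial"
git clone -q "$tmp/upstream" "$tmp/gnulib"   # stand-in for your gnulib clone

# The periodic update; with real clones, this would run inside $DEVDIR:
DEVDIR=$tmp
for repo in gnulib; do
    git -C "$DEVDIR/$repo" pull -q
done
```

With the real clones, the loop would simply list `gnulib autoconf-archive` and `DEVDIR` would point at your development directory.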
@item GNU Automake (@command{automake})
@cindex GNU Automake
-GNU Automake will build the @file{Makefile.in} files in each sub-directory
-using the (hand-written) @file{Makefile.am} files. The @file{Makefile.in}s
-are subsequently used to generate the @file{Makefile}s when the user runs
-@command{./configure} before building.
+GNU Automake will build the @file{Makefile.in} files in each sub-directory
using the (hand-written) @file{Makefile.am} files.
+The @file{Makefile.in}s are subsequently used to generate the @file{Makefile}s
when the user runs @command{./configure} before building.
@item GNU Autoconf (@command{autoconf})
@cindex GNU Autoconf
-GNU Autoconf will build the @file{configure} script using the
-configurations we have defined (hand-written) in @file{configure.ac}.
+GNU Autoconf will build the @file{configure} script using the configurations
we have defined (hand-written) in @file{configure.ac}.
@item GNU Autoconf Archive
@cindex GNU Autoconf Archive
-These are a large collection of tests that can be called to run at
-@command{./configure} time. See the explanation under GNU Portability
-Library above for instructions on obtaining it and keeping it up to date.
+This is a large collection of tests that can be called to run at
@command{./configure} time.
+See the explanation under GNU Portability Library above for instructions on
obtaining it and keeping it up to date.
@item GNU Libtool (@command{libtool})
@cindex GNU Libtool
-GNU Libtool is in charge of building all the libraries in Gnuastro. The
-libraries contain functions that are used by more than one program and are
-installed for use in other programs. They are thus put in a separate
-directory (@file{lib/}).
+GNU Libtool is in charge of building all the libraries in Gnuastro.
+The libraries contain functions that are used by more than one program and are
installed for use in other programs.
+They are thus put in a separate directory (@file{lib/}).
@item GNU help2man (@command{help2man})
@cindex GNU help2man
@@ -5498,36 +4159,20 @@ GNU help2man is used to convert the output of the
@option{--help} option
@item @LaTeX{} and some @TeX{} packages
@cindex @LaTeX{}
@cindex @TeX{} Live
-Some of the figures in this book are built by @LaTeX{} (using the PGF/TikZ
-package). The @LaTeX{} source for those figures is version controlled for
-easy maintenance not the actual figures. So the @file{./boostrap} script
-will run @LaTeX{} to build the figures. The best way to install @LaTeX{}
-and all the necessary packages is through
-@url{https://www.tug.org/texlive/, @TeX{} live} which is a package manager
-for @TeX{} related tools that is independent of any operating system. It is
-thus preferred to the @TeX{} Live versions distributed by your operating
-system.
-
-To install @TeX{} Live, go to the webpage and download the appropriate
-installer by following the ``download'' link. Note that by default the full
-package repository will be downloaded and installed (around 4 Giga Bytes)
-which can take @emph{very} long to download and to update later. However,
-most packages are not needed by everyone, it is easier, faster and better
-to install only the ``Basic scheme'' (consisting of only the most basic
-@TeX{} and @LaTeX{} packages, which is less than 200 Mega
-bytes)@footnote{You can also download the DVD iso file at a later time to
-keep as a backup for when you don't have internet connection if you need a
-package.}.
-
-After the installation, be sure to set the environment variables as
-suggested in the end of the outputs. Any time you confront (need) a package
-you don't have, simply install it with a command like below (similar to how
-you install software from your operating system's package
-manager)@footnote{After running @TeX{}, or @LaTeX{}, you might get a
-warning complaining about a @file{missingfile}. Run `@command{tlmgr info
-missingfile}' to see the package(s) containing that file which you can
-install.}. To install all the necessary @TeX{} packages for a successful
-Gnuastro bootstrap, run this command:
+Some of the figures in this book are built by @LaTeX{} (using the PGF/TikZ
package).
+The @LaTeX{} source for those figures (not the actual figures) is version
controlled for easy maintenance.
+So the @file{./bootstrap} script will run @LaTeX{} to build the figures.
+The best way to install @LaTeX{} and all the necessary packages is through
@url{https://www.tug.org/texlive/, @TeX{} live} which is a package manager for
@TeX{} related tools that is independent of any operating system.
+It is thus preferred to the @TeX{} Live versions distributed by your operating
system.
+
+To install @TeX{} Live, go to the webpage and download the appropriate
installer by following the ``download'' link.
+Note that by default the full package repository will be downloaded and
installed (around 4 gigabytes), which can take @emph{very} long to download
and to update later.
+However, since most packages are not needed by everyone, it is easier, faster
and better to install only the ``Basic scheme'' (consisting of only the most
basic @TeX{} and @LaTeX{} packages, which is less than 200 megabytes)@footnote{You
can also download the DVD iso file at a later time to keep as a backup for when
you don't have internet connection if you need a package.}.
+
+After the installation, be sure to set the environment variables as suggested
at the end of the output.
+Any time you need a package you don't have, simply install it with a command
like the one below (similar to how you install software from your operating
system's package manager)@footnote{After running @TeX{}, or @LaTeX{}, you might
get a warning complaining about a @file{missingfile}.
+Run `@command{tlmgr info missingfile}' to see the package(s) containing that
file which you can install.}.
+To install all the necessary @TeX{} packages for a successful Gnuastro
bootstrap, run this command:
@example
$ su
@@ -5538,9 +4183,8 @@ $ su
@item ImageMagick (@command{imagemagick})
@cindex ImageMagick
-ImageMagick is a wonderful and robust program for image manipulation on the
-command-line. @file{bootsrap} uses it to convert the book images into the
-formats necessary for the various book formats.
+ImageMagick is a wonderful and robust program for image manipulation on the
command-line.
+@file{bootstrap} uses it to convert the book images into the formats necessary
for the various book formats.
@end table
@@ -5555,13 +4199,11 @@ formats necessary for the various book formats.
@cindex Compiling from source
@cindex Source code compilation
@cindex Distributions, GNU/Linux
-The most basic way to install a package on your system is to build the
-packages from source yourself. Alternatively, you can use your operating
-system's package manager to download pre-compiled files and install
-them. The latter choice is easier and faster. However, we recommend that
-you build the @ref{Mandatory dependencies} yourself from source (all
-necessary commands and links are given in the respective section). Here are
-some basic reasons behind this recommendation.
+The most basic way to install a package on your system is to build the
packages from source yourself.
+Alternatively, you can use your operating system's package manager to download
pre-compiled files and install them.
+The latter choice is easier and faster.
+However, we recommend that you build the @ref{Mandatory dependencies} yourself
from source (all necessary commands and links are given in the respective
section).
+Here are some basic reasons behind this recommendation.
@enumerate
@@ -5570,46 +4212,32 @@ Your distribution's pre-built package might not be the
most recent
release.
@item
-For each package, Gnuastro might preform better (or require) certain
-configuration options that your distribution's package managers didn't add
-for you. If present, these configuration options are explained during the
-installation of each in the sections below (for example in
-@ref{CFITSIO}). When the proper configuration has not been set, the
-programs should complain and inform you.
+For each package, Gnuastro might perform better with (or require) certain
configuration options that your distribution's package managers didn't add for
you.
+If present, these configuration options are explained during the installation
of each in the sections below (for example in @ref{CFITSIO}).
+When the proper configuration has not been set, the programs should complain
and inform you.
@item
-For the libraries, they might separate the binary file from the header
-files which can cause confusion, see @ref{Known issues}.
+For the libraries, package managers might separate the binary file from the
header files, which can cause confusion; see @ref{Known issues}.
@item
-Like any other tool, the science you derive from Gnuastro's tools highly
-depend on these lower level dependencies, so generally it is much better to
-have a close connection with them. By reading their manuals, installing
-them and staying up to date with changes/bugs in them, your scientific
-results and understanding (of what is going on, and thus how you interpret
-your scientific results) will also correspondingly improve.
+Like any other tool, the science you derive from Gnuastro's tools depends
highly on these lower level dependencies, so generally it is much better to
have a close connection with them.
+By reading their manuals, installing them and staying up to date with
changes/bugs in them, your scientific results and understanding (of what is
going on, and thus how you interpret your scientific results) will also
correspondingly improve.
@end enumerate
-Based on your package manager, you can use any of the following commands to
-install the mandatory and optional dependencies. If your package manager
-isn't included in the list below, please send us the respective command, so
-we add it. Gnuastro itself if also already packaged in some package
-managers (for example Debian or Homebrew).
+Based on your package manager, you can use any of the following commands to
install the mandatory and optional dependencies.
+If your package manager isn't included in the list below, please send us the
respective command so we can add it.
+Gnuastro itself is also already packaged in some package managers (for example
Debian or Homebrew).
-As discussed above, we recommend installing the @emph{mandatory}
-dependencies manually from source (see @ref{Mandatory
-dependencies}). Therefore, in each command below, first the optional
-dependencies are given. The mandatory dependencies are included after an
-empty line. If you would also like to install the mandatory dependencies
-with your package manager, just ignore the empty line.
+As discussed above, we recommend installing the @emph{mandatory} dependencies
manually from source (see @ref{Mandatory dependencies}).
+Therefore, in each command below, first the optional dependencies are given.
+The mandatory dependencies are included after an empty line.
+If you would also like to install the mandatory dependencies with your package
manager, just ignore the empty line.
-For better archivability and compression ratios, Gnuastro's recommended
-tarball compression format is with the
-@url{http://lzip.nongnu.org/lzip.html, Lzip} program, see @ref{Release
-tarball}. Therefore, the package manager commands below also contain Lzip.
+For better archivability and compression ratios, Gnuastro's recommended
tarball compression format is with the @url{http://lzip.nongnu.org/lzip.html,
Lzip} program, see @ref{Release tarball}.
+Therefore, the package manager commands below also contain Lzip.
@table @asis
-@item @command{apt-get} (Debian-based OSs: Debian, Ubuntu, Linux Mint, and etc)
+@item @command{apt-get} (Debian-based OSs: Debian, Ubuntu, Linux Mint, etc)
@cindex Debian
@cindex Ubuntu
@cindex Linux Mint
@@ -5617,14 +4245,12 @@ tarball}. Therefore, the package manager commands below
also contain Lzip.
@cindex Advanced Packaging Tool (APT, Debian)
@url{https://en.wikipedia.org/wiki/Debian,Debian} is one of the oldest
GNU/Linux
-distributions@footnote{@url{https://en.wikipedia.org/wiki/List_of_Linux_distributions#Debian-based}}.
It
-thus has a very extended user community and a robust internal structure and
-standards. All of it is free software and based on the work of volunteers
-around the world. Many distributions are thus derived from it, for example
-Ubuntu and Linux Mint. This arguably makes Debian-based OSs the largest,
-and most used, class of GNU/Linux distributions. All of them use Debian's
-Advanced Packaging Tool (APT, for example @command{apt-get}) for managing
-packages.
+distributions@footnote{@url{https://en.wikipedia.org/wiki/List_of_Linux_distributions#Debian-based}}.
+It thus has a very large user community and a robust internal structure and
standards.
+All of it is free software and based on the work of volunteers around the
world.
+Many distributions are thus derived from it, for example Ubuntu and Linux Mint.
+This arguably makes Debian-based OSs the largest, and most used, class of
GNU/Linux distributions.
+All of them use Debian's Advanced Packaging Tool (APT, for example
@command{apt-get}) for managing packages.
@example
$ sudo apt-get install ghostscript libtool-bin libjpeg-dev \
libtiff-dev libgit2-dev lzip \
@@ -5633,13 +4259,11 @@ $ sudo apt-get install ghostscript libtool-bin
libjpeg-dev \
@end example
@noindent
-Gnuastro is @url{https://tracker.debian.org/pkg/gnuastro,packaged} in
-Debian (and thus some of its derivate operating systems). Just make sure it
-is the most recent version.
-
+Gnuastro is @url{https://tracker.debian.org/pkg/gnuastro,packaged} in Debian
(and thus some of its derivative operating systems).
+Just make sure it is the most recent version.
@item @command{dnf}
-@itemx @command{yum} (Red Hat-based OSs: Red Hat, Fedora, CentOS, Scientific
Linux, and etc)
+@itemx @command{yum} (Red Hat-based OSs: Red Hat, Fedora, CentOS, Scientific
Linux, etc)
@cindex RHEL
@cindex Fedora
@cindex CentOS
@@ -5647,14 +4271,11 @@ is the most recent version.
@cindex @command{dnf}
@cindex @command{yum}
@cindex Scientific Linux
-@url{https://en.wikipedia.org/wiki/Red_Hat,Red Hat Enterprise Linux} (RHEL)
-is released by Red Hat Inc. RHEL requires paid subscriptions for use of its
-binaries and support. But since it is free software, many other teams use
-its code to spin-off their own distributions based on RHEL. Red Hat-based
-GNU/Linux distributions initially used the ``Yellowdog Updated, Modifier''
-(YUM) package manager, which has been replaced by ``Dandified yum''
-(DNF). If the latter isn't available on your system, you can use
-@command{yum} instead of @command{dnf} in the command below.
+@url{https://en.wikipedia.org/wiki/Red_Hat,Red Hat Enterprise Linux} (RHEL) is
released by Red Hat Inc.
+RHEL requires paid subscriptions for use of its binaries and support.
+But since it is free software, many other teams use its code to spin off their
own distributions based on RHEL.
+Red Hat-based GNU/Linux distributions initially used the ``Yellowdog Updater,
Modified'' (YUM) package manager, which has been replaced by ``Dandified yum''
(DNF).
+If the latter isn't available on your system, you can use @command{yum}
instead of @command{dnf} in the command below.
@example
$ sudo dnf install ghostscript libtool libjpeg-devel \
libtiff-devel libgit2-devel lzip \
@@ -5667,18 +4288,14 @@ $ sudo dnf install ghostscript libtool libjpeg-devel
\
@cindex Homebrew
@cindex MacPorts
@cindex @command{brew}
-@url{https://en.wikipedia.org/wiki/MacOS,macOS} is the operating system
-used on Apple devices. macOS does not come with a package manager
-pre-installed, but several widely used, third-party package managers exist,
-such as Homebrew or MacPorts. Both are free software. Currently we have
-only tested Gnuastro's installation with Homebrew as described below.
-
-If not already installed, first obtain Homebrew by following the
-instructions at @url{https://brew.sh}. Homebrew manages packages in
-different `taps'. To install WCSLIB (discussed in @ref{Mandatory
-dependencies}) via Homebrew you will need to @command{tap} into
-@command{brewsci/science} first (the tap may change in the future, but can
-be found by calling @command{brew search wcslib}).
+@url{https://en.wikipedia.org/wiki/MacOS,macOS} is the operating system used
on Apple devices.
+macOS does not come with a package manager pre-installed, but several widely
used, third-party package managers exist, such as Homebrew or MacPorts.
+Both are free software.
+Currently we have only tested Gnuastro's installation with Homebrew as
described below.
+
+If not already installed, first obtain Homebrew by following the instructions
at @url{https://brew.sh}.
+Homebrew manages packages in different `taps'.
+To install WCSLIB (discussed in @ref{Mandatory dependencies}) via Homebrew you
will need to @command{tap} into @command{brewsci/science} first (the tap may
change in the future, but can be found by calling @command{brew search wcslib}).
@example
$ brew install ghostscript libtool libjpeg libtiff \
libgit2 lzip \
@@ -5691,12 +4308,9 @@ $ brew install wcslib
@item @command{pacman} (Arch Linux)
@cindex Arch Linux
@cindex @command{pacman}
-@url{https://en.wikipedia.org/wiki/Arch_Linux,Arch Linux} is a smaller
-GNU/Linux distribution, which follows the KISS principle (``keep it simple,
-stupid'') as a general guideline. It ``focuses on elegance, code
-correctness, minimalism and simplicity, and expects the user to be willing
-to make some effort to understand the system's operation''. Arch Linux uses
-``Package manager'' (Pacman) to manage its packages/components.
+@url{https://en.wikipedia.org/wiki/Arch_Linux,Arch Linux} is a smaller
GNU/Linux distribution, which follows the KISS principle (``keep it simple,
stupid'') as a general guideline.
+It ``focuses on elegance, code correctness, minimalism and simplicity, and
expects the user to be willing to make some effort to understand the system's
operation''.
+Arch Linux uses ``Package manager'' (Pacman) to manage its packages/components.
@example
$ sudo pacman -S ghostscript libtool libjpeg libtiff \
libgit2 lzip \
@@ -5707,15 +4321,10 @@ $ sudo pacman -S ghostscript libtool libjpeg libtiff
\
@item @command{zypper} (openSUSE and SUSE Linux Enterprise Server)
@cindex openSUSE
@cindex SUSE Linux Enterprise Server
-@cindex @command{zypper}
-@url{https://www.opensuse.org,openSUSE} is a community project supported by
-@url{https://www.suse.com,SUSE} with both stable and rolling releases.
-SUSE Linux Enterprise
-Server@footnote{@url{https://www.suse.com/products/server}} (SLES) is the
-commercial offering which shares code and tools. Many additional packages
-are offered in the Build
-Service@footnote{@url{https://build.opensuse.org}}. openSUSE and SLES use
-@command{zypper} (cli) and YaST (GUI) for managing repositories and
+@cindex @command{zypper}
+@url{https://www.opensuse.org,openSUSE} is a community project supported by
@url{https://www.suse.com,SUSE} with both stable and rolling releases.
+SUSE Linux Enterprise
Server@footnote{@url{https://www.suse.com/products/server}} (SLES) is the
commercial offering which shares code and tools.
+Many additional packages are offered in the Build
Service@footnote{@url{https://build.opensuse.org}}.
+openSUSE and SLES use @command{zypper} (cli) and YaST (GUI) for managing
repositories and
packages.
@example
@@ -5725,8 +4334,7 @@ $ sudo zypper install ghostscript_any libtool pkgconfig
\
wcslib-devel
@end example
@noindent
-When building Gnuastro, run the configure script with the following
-@code{CPPFLAGS} environment variable:
+When building Gnuastro, run the configure script with the following
@code{CPPFLAGS} environment variable:
@example
$ ./configure CPPFLAGS="-I/usr/include/cfitsio"
@@ -5736,20 +4344,14 @@ $ ./configure CPPFLAGS="-I/usr/include/cfitsio"
@c in @command{zypper}. Just make sure it is the most recent version.
@end table
-Usually, when libraries are installed by operating system package managers,
-there should be no problems when configuring and building other programs
-from source (that depend on the libraries: Gnuastro in this case). However,
-in some special conditions, problems may pop-up during the configuration,
-building, or checking/running any of Gnuastro's programs. The most common
-of such problems and their solution are discussed below.
+Usually, when libraries are installed by operating system package managers,
there should be no problems when configuring and building other programs from
source (that depend on the libraries: Gnuastro in this case).
+However, in some special conditions, problems may pop up during the
configuration, building, or checking/running any of Gnuastro's programs.
+The most common of such problems and their solution are discussed below.
@cartouche
@noindent
-@strong{Not finding library during configuration:} If a library is
-installed, but during Gnuastro's @command{configure} step the library isn't
-found, then configure Gnuastro like the command below (correcting
-@file{/path/to/lib}). For more, see @ref{Known issues} and
-@ref{Installation directory}.
+@strong{Not finding library during configuration:} If a library is installed,
but during Gnuastro's @command{configure} step the library isn't found, then
configure Gnuastro like the command below (correcting @file{/path/to/lib}).
+For more, see @ref{Known issues} and @ref{Installation directory}.
@example
$ ./configure LDFLAGS="-L/path/to/lib"
@end example
@@ -5757,11 +4359,8 @@ $ ./configure LDFLAGS="-L/path/to/lib"
@cartouche
@noindent
-@strong{Not finding header (.h) files while building:} If a library is
-installed, but during Gnuastro's @command{make} step, the library's header
-(file with a @file{.h} suffix) isn't found, then configure Gnuastro like
-the command below (correcting @file{/path/to/include}). For more, see
-@ref{Known issues} and @ref{Installation directory}.
+@strong{Not finding header (.h) files while building:} If a library is
installed, but during Gnuastro's @command{make} step, the library's header
(file with a @file{.h} suffix) isn't found, then configure Gnuastro like the
command below (correcting @file{/path/to/include}).
+For more, see @ref{Known issues} and @ref{Installation directory}.
@example
$ ./configure CPPFLAGS="-I/path/to/include"
@end example
@@ -5769,11 +4368,9 @@ $ ./configure CPPFLAGS="-I/path/to/include"
@cartouche
@noindent
-@strong{Gnuastro's programs don't run during check or after install:} If a
-library is installed, but the programs don't run due to linking problems,
-set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro is
-installed in @file{/path/to/installed}). For more, see @ref{Known issues}
-and @ref{Installation directory}.
+@strong{Gnuastro's programs don't run during check or after install:}
+If a library is installed, but the programs don't run due to linking problems,
set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro is
installed in @file{/path/to/installed}).
+For more, see @ref{Known issues} and @ref{Installation directory}.
@example
$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/installed/lib"
@end example
@@ -5790,15 +4387,12 @@ $ export
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/installed/lib"
@node Downloading the source, Build and install, Dependencies, Installation
@section Downloading the source
-Gnuastro's source code can be downloaded in two ways. As a tarball, ready
-to be configured and installed on your system (as described in @ref{Quick
-start}), see @ref{Release tarball}. If you want official releases of stable
-versions this is the best, easiest and most common option. Alternatively,
-you can clone the version controlled history of Gnuastro, run one extra
-bootstrapping step and then follow the same steps as the tarball. This will
-give you access to all the most recent work that will be included in the
-next release along with the full project history. The process is thoroughly
-introduced in @ref{Version controlled source}.
+Gnuastro's source code can be downloaded in two ways.
+As a tarball, ready to be configured and installed on your system (as
described in @ref{Quick start}), see @ref{Release tarball}.
+If you want official releases of stable versions, this is the best, easiest
and most common option.
+Alternatively, you can clone the version controlled history of Gnuastro, run
one extra bootstrapping step and then follow the same steps as the tarball.
+This will give you access to all the most recent work that will be included in
the next release along with the full project history.
+The process is thoroughly introduced in @ref{Version controlled source}.
@@ -5810,59 +4404,43 @@ introduced in @ref{Version controlled source}.
@node Release tarball, Version controlled source, Downloading the source,
Downloading the source
@subsection Release tarball
-A release tarball (commonly compressed) is the most common way of obtaining
-free and open source software. A tarball is a snapshot of one particular
-moment in the Gnuastro development history along with all the necessary
-files to configure, build, and install Gnuastro easily (see @ref{Quick
-start}). It is very straightforward and needs the least set of dependencies
-(see @ref{Mandatory dependencies}). Gnuastro has tarballs for official
-stable releases and pre-releases for testing. See @ref{Version numbering}
-for more on the two types of releases and the formats of the version
-numbers. The URLs for each type of release are given below.
+A release tarball (commonly compressed) is the most common way of obtaining
free and open source software.
+A tarball is a snapshot of one particular moment in the Gnuastro development
history along with all the necessary files to configure, build, and install
Gnuastro easily (see @ref{Quick start}).
+It is very straightforward and needs the smallest set of dependencies (see
@ref{Mandatory dependencies}).
+Gnuastro has tarballs for official stable releases and pre-releases for
testing.
+See @ref{Version numbering} for more on the two types of releases and the
formats of the version numbers.
+The URLs for each type of release are given below.
@table @asis
@item Official stable releases (@url{http://ftp.gnu.org/gnu/gnuastro}):
-This URL hosts the official stable releases of Gnuastro. Always use the
-most recent version (see @ref{Version numbering}). By clicking on the
-``Last modified'' title of the second column, the files will be sorted by
-their date which you can also use to find the latest version. It is
-recommended to use a mirror to download these tarballs, please visit
-@url{http://ftpmirror.gnu.org/gnuastro/} and see below.
+This URL hosts the official stable releases of Gnuastro.
+Always use the most recent version (see @ref{Version numbering}).
+By clicking on the ``Last modified'' title of the second column, the files
will be sorted by their date, which you can also use to find the latest version.
+It is recommended to use a mirror to download these tarballs; please visit
@url{http://ftpmirror.gnu.org/gnuastro/} and see below.
@item Pre-release tarballs (@url{http://alpha.gnu.org/gnu/gnuastro}):
-This URL contains unofficial pre-release versions of Gnuastro. The
-pre-release versions of Gnuastro here are for enthusiasts to try out before
-an official release. If there are problems, or bugs then the testers will
-inform the developers to fix before the next official release. See
-@ref{Version numbering} to understand how the version numbers here are
-formatted. If you want to remain even more up-to-date with the developing
-activities, please clone the version controlled source as described in
-@ref{Version controlled source}.
+This URL contains unofficial pre-release versions of Gnuastro.
+The pre-release versions of Gnuastro here are for enthusiasts to try out
before an official release.
+If there are problems or bugs, then the testers will inform the developers to
fix them before the next official release.
+See @ref{Version numbering} to understand how the version numbers here are
formatted.
+If you want to remain even more up-to-date with the developing activities,
please clone the version controlled source as described in @ref{Version
controlled source}.
@end table
@cindex Gzip
@cindex Lzip
-Gnuastro's official/stable tarball is released with two formats: Gzip (with
-suffix @file{.tar.gz}) and Lzip (with suffix @file{.tar.lz}). The
-pre-release tarballs (after version 0.3) are released only as an Lzip
-tarball. Gzip is a very well-known and widely used compression program
-created by GNU and available in most systems. However, Lzip provides a
-better compression ratio and more robust archival capacity. For example
-Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip respectively,
-see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip webpage} for
-more. Lzip might not be pre-installed in your operating system, if so,
-installing it from your operating system's package manager or from source
-is very easy and fast (it is a very small program).
-
-The GNU FTP server is mirrored (has backups) in various locations on the
-globe (@url{http://www.gnu.org/order/ftp.html}). You can use the closest
-mirror to your location for a more faster download. Note that only some
-mirrors keep track of the pre-release (alpha) tarballs. Also note that if
-you want to download immediately after and announcement (see
-@ref{Announcements}), the mirrors might need some time to synchronize with
-the main GNU FTP server.
+Gnuastro's official/stable tarball is released with two formats: Gzip (with
suffix @file{.tar.gz}) and Lzip (with suffix @file{.tar.lz}).
+The pre-release tarballs (after version 0.3) are released only as an Lzip
tarball.
+Gzip is a very well-known and widely used compression program created by GNU
and available in most systems.
+However, Lzip provides a better compression ratio and more robust archival
capacity.
+For example, Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip
respectively; see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip webpage}
for more.
+Lzip might not be pre-installed in your operating system; if so, installing it
from your operating system's package manager or from source is very easy and
fast (it is a very small program).
+
+The GNU FTP server is mirrored (has backups) in various locations on the globe
(@url{http://www.gnu.org/order/ftp.html}).
+You can use the closest mirror to your location for a faster download.
+Note that only some mirrors keep track of the pre-release (alpha) tarballs.
+Also note that if you want to download immediately after an announcement (see
@ref{Announcements}), the mirrors might need some time to synchronize with the
main GNU FTP server.
@node Version controlled source, , Release tarball, Downloading the source
@@ -5870,28 +4448,16 @@ the main GNU FTP server.
@cindex Git
@cindex Version control
-The publicly distributed Gnuastro tar-ball (for example
-@file{gnuastro-X.X.tar.gz}) does not contain the revision history, it is
-only a snapshot of the source code at one significant instant of Gnuastro's
-history (specified by the version number, see @ref{Version numbering}),
-ready to be configured and built. To be able to develop successfully, the
-revision history of the code can be very useful to track when something was
-added or changed, also some updates that are not yet officially released
-might be in it.
-
-We use Git for the version control of Gnuastro. For those who are not
-familiar with it, we recommend the @url{https://git-scm.com/book/en, ProGit
-book}. The whole book is publicly available for online reading and
-downloading and does a wonderful job at explaining the concepts and best
-practices.
-
-Let's assume you want to keep Gnuastro in the @file{TOPGNUASTRO} directory
-(can be any directory, change the value below). The full version controlled
-history of Gnuastro can be cloned in @file{TOPGNUASTRO/gnuastro} by running
-the following commands@footnote{If your internet connection is active, but
-Git complains about the network, it might be due to your network setup not
-recognizing the Git protocol. In that case use the following URL which uses
-the HTTP protocol instead: @command{http://git.sv.gnu.org/r/gnuastro.git}}:
+The publicly distributed Gnuastro tar-ball (for example
@file{gnuastro-X.X.tar.gz}) does not contain the revision history, it is only a
snapshot of the source code at one significant instant of Gnuastro's history
(specified by the version number, see @ref{Version numbering}), ready to be
configured and built.
+To be able to develop successfully, the revision history of the code can be
very useful to track when something was added or changed; it may also contain
some updates that are not yet officially released.
+
+We use Git for the version control of Gnuastro.
+For those who are not familiar with it, we recommend the
@url{https://git-scm.com/book/en, ProGit book}.
+The whole book is publicly available for online reading and downloading and
does a wonderful job at explaining the concepts and best practices.
+
+Let's assume you want to keep Gnuastro in the @file{TOPGNUASTRO} directory
(can be any directory, change the value below).
+The full version controlled history of Gnuastro can be cloned in
@file{TOPGNUASTRO/gnuastro} by running the following commands@footnote{If your
internet connection is active, but Git complains about the network, it might be
due to your network setup not recognizing the Git protocol.
+In that case use the following URL which uses the HTTP protocol instead:
@command{http://git.sv.gnu.org/r/gnuastro.git}}:
@example
$ TOPGNUASTRO=/home/yourname/Research/projects/
@@ -5900,28 +4466,18 @@ $ git clone git://git.sv.gnu.org/gnuastro.git
@end example
@noindent
-The @file{$TOPGNUASTRO/gnuastro} directory will contain hand-written
-(version controlled) source code for Gnuastro's programs, libraries, this
-book and the tests. All are divided into sub-directories with standard and
-very descriptive names. The version controlled files in the top cloned
-directory are either mainly in capital letters (for example @file{THANKS}
-and @file{README}) or mainly written in small-caps (for example
-@file{configure.ac} and @file{Makefile.am}). The former are
-non-programming, standard writing for human readers containing high-level
-information about the whole package. The latter are instructions to
-customize the GNU build system for Gnuastro. For more on Gnuastro's source
-code structure, please see @ref{Developing}. We won't go any deeper here.
+The @file{$TOPGNUASTRO/gnuastro} directory will contain hand-written (version
controlled) source code for Gnuastro's programs, libraries, this book and the
tests.
+All are divided into sub-directories with standard and very descriptive names.
+The version controlled files in the top cloned directory are either mainly in
capital letters (for example @file{THANKS} and @file{README}) or mainly written
in small-caps (for example @file{configure.ac} and @file{Makefile.am}).
+The former are non-programming, standard writing for human readers containing
high-level information about the whole package.
+The latter are instructions to customize the GNU build system for Gnuastro.
+For more on Gnuastro's source code structure, please see @ref{Developing}.
+We won't go any deeper here.
-The cloned Gnuastro source cannot immediately be configured, compiled, or
-installed since it only contains hand-written files, not automatically
-generated or imported files which do all the hard work of the build
-process. See @ref{Bootstrapping} for the process of generating and
-importing those files (its not too hard!). Once you have bootstrapped
-Gnuastro, you can run the standard procedures (in @ref{Quick start}). Very
-soon after you have cloned it, Gnuastro's main @file{master} branch will be
-updated on the main repository (since the developers are actively working
-on Gnuastro), for the best practices in keeping your local history in sync
-with the main repository see @ref{Synchronizing}.
+The cloned Gnuastro source cannot immediately be configured, compiled, or
installed since it only contains hand-written files, not automatically
generated or imported files which do all the hard work of the build process.
+See @ref{Bootstrapping} for the process of generating and importing those
files (it's not too hard!).
+Once you have bootstrapped Gnuastro, you can run the standard procedures (in
@ref{Quick start}).
+Very soon after you have cloned it, Gnuastro's main @file{master} branch will
be updated on the main repository (since the developers are actively working on
Gnuastro); for the best practices in keeping your local history in sync with
the main repository, see @ref{Synchronizing}.
@@ -5941,63 +4497,37 @@ with the main repository see @ref{Synchronizing}.
@cindex GNU Portability Library (Gnulib)
@cindex Automatically created build files
@noindent
-The version controlled source code lacks the source files that we have not
-written or are automatically built. These automatically generated files are
-included in the distributed tar ball for each distribution (for example
-@file{gnuastro-X.X.tar.gz}, see @ref{Version numbering}) and make it easy
-to immediately configure, build, and install Gnuastro. However from the
-perspective of version control, they are just bloatware and sources of
-confusion (since they are not changed by Gnuastro developers).
-
-The process of automatically building and importing necessary files into
-the cloned directory is known as @emph{bootstrapping}. All the instructions
-for an automatic bootstrapping are available in @file{bootstrap} and
-configured using @file{bootstrap.conf}. @file{bootstrap} and @file{COPYING}
-(which contains the software copyright notice) are the only files not
-written by Gnuastro developers but under version control to enable simple
-bootstrapping and legal information on usage immediately after
-cloning. @file{bootstrap.conf} is maintained by the GNU Portability Library
-(Gnulib) and this file is an identical copy, so do not make any changes in
-this file since it will be replaced when Gnulib releases an update. Make
-all your changes in @file{bootstrap.conf}.
-
-The bootstrapping process has its own separate set of dependencies, the
-full list is given in @ref{Bootstrapping dependencies}. They are generally
-very low-level and used by a very large set of commonly used programs, so
-they are probably already installed on your system. The simplest way to
-bootstrap Gnuastro is to simply run the bootstrap script within your cloned
-Gnuastro directory as shown below. However, please read the next paragraph
-before doing so (see @ref{Version controlled source} for
-@file{TOPGNUASTRO}).
+The version controlled source code lacks the source files that we have not
written or are automatically built.
+These automatically generated files are included in the distributed tar ball
for each distribution (for example @file{gnuastro-X.X.tar.gz}, see @ref{Version
numbering}) and make it easy to immediately configure, build, and install
Gnuastro.
+However, from the perspective of version control, they are just bloatware and
sources of confusion (since they are not changed by Gnuastro developers).
+
+The process of automatically building and importing necessary files into the
cloned directory is known as @emph{bootstrapping}.
+All the instructions for an automatic bootstrapping are available in
@file{bootstrap} and configured using @file{bootstrap.conf}.
+@file{bootstrap} and @file{COPYING} (which contains the software copyright
notice) are the only files not written by Gnuastro developers but under version
control to enable simple bootstrapping and legal information on usage
immediately after cloning.
+@file{bootstrap} is maintained by the GNU Portability Library (Gnulib)
and this file is an identical copy, so do not make any changes in this file
since it will be replaced when Gnulib releases an update.
+Make all your changes in @file{bootstrap.conf}.
+
+The bootstrapping process has its own separate set of dependencies, the full
list is given in @ref{Bootstrapping dependencies}.
+They are generally very low-level and used by a very large set of commonly
used programs, so they are probably already installed on your system.
+The simplest way to bootstrap Gnuastro is to run the bootstrap script within
your cloned Gnuastro directory as shown below.
+However, please read the next paragraph before doing so (see @ref{Version
controlled source} for @file{TOPGNUASTRO}).
@example
$ cd TOPGNUASTRO/gnuastro
$ ./bootstrap # Requires internet connection
@end example
-Without any options, @file{bootstrap} will clone Gnulib within your cloned
-Gnuastro directory (@file{TOPGNUASTRO/gnuastro/gnulib}) and download the
-necessary Autoconf archives macros. So if you run bootstrap like this, you
-will need an internet connection every time you decide to bootstrap. Also,
-Gnulib is a large package and cloning it can be slow. It will also keep the
-full Gnulib repository within your Gnuastro repository, so if another one
-of your projects also needs Gnulib, and you insist on running bootstrap
-like this, you will have two copies. In case you regularly backup your
-important files, Gnulib will also slow down the backup process. Therefore
-while the simple invocation above can be used with no problem, it is not
-recommended. To do better, see the next paragraph.
-
-The recommended way to get these two packages is thoroughly discussed in
-@ref{Bootstrapping dependencies} (in short: clone them in the separate
-@file{DEVDIR/} directory). The following commands will take you into the
-cloned Gnuastro directory and run the @file{bootstrap} script, while
-telling it to copy some files (instead of making symbolic links, with the
-@option{--copy} option, this is not mandatory@footnote{The @option{--copy}
-option is recommended because some backup systems might do strange things
-with symbolic links.}) and where to look for Gnulib (with the
-@option{--gnulib-srcdir} option). Please note that the address given to
-@option{--gnulib-srcdir} has to be an absolute address (so don't use
-@file{~} or @file{../} for example).
+Without any options, @file{bootstrap} will clone Gnulib within your cloned
Gnuastro directory (@file{TOPGNUASTRO/gnuastro/gnulib}) and download the
necessary Autoconf archives macros.
+So if you run bootstrap like this, you will need an internet connection every
time you decide to bootstrap.
+Also, Gnulib is a large package and cloning it can be slow.
+It will also keep the full Gnulib repository within your Gnuastro repository,
so if another one of your projects also needs Gnulib, and you insist on running
bootstrap like this, you will have two copies.
+In case you regularly back up your important files, Gnulib will also slow down
the backup process.
+Therefore while the simple invocation above can be used with no problem, it is
not recommended.
+To do better, see the next paragraph.
+
+The recommended way to get these two packages is thoroughly discussed in
@ref{Bootstrapping dependencies} (in short: clone them in the separate
@file{DEVDIR/} directory).
+The following commands will take you into the cloned Gnuastro directory and
run the @file{bootstrap} script, while telling it to copy some files (instead
of making symbolic links, with the @option{--copy} option, this is not
mandatory@footnote{The @option{--copy} option is recommended because some
backup systems might do strange things with symbolic links.}) and where to look
for Gnulib (with the @option{--gnulib-srcdir} option).
+Please note that the address given to @option{--gnulib-srcdir} has to be an
absolute address (so don't use @file{~} or @file{../} for example).
@example
$ cd $TOPGNUASTRO/gnuastro
@@ -6011,68 +4541,49 @@ $ ./bootstrap --copy --gnulib-srcdir=$DEVDIR/gnulib
@cindex GNU Automake
@cindex GNU C library
@cindex GNU build system
-Since Gnulib and Autoconf archives are now available in your local
-directories, you don't need an internet connection every time you decide to
-remove all untracked files and redo the bootstrap (see box below). You can
-also use the same command on any other project that uses Gnulib. All the
-necessary GNU C library functions, Autoconf macros and Automake inputs are
-now available along with the book figures. The standard GNU build system
-(@ref{Quick start}) will do the rest of the job.
+Since Gnulib and Autoconf archives are now available in your local
directories, you don't need an internet connection every time you decide to
remove all untracked files and redo the bootstrap (see box below).
+You can also use the same command on any other project that uses Gnulib.
+All the necessary GNU C library functions, Autoconf macros and Automake inputs
are now available along with the book figures.
+The standard GNU build system (@ref{Quick start}) will do the rest of the job.
@cartouche
@noindent
-@strong{Undoing the bootstrap:} During the development, it might happen
-that you want to remove all the automatically generated and imported
-files. In other words, you might want to reverse the bootstrap
-process. Fortunately Git has a good program for this job: @command{git
-clean}. Run the following command and every file that is not version
-controlled will be removed.
+@strong{Undoing the bootstrap:}
+During the development, it might happen that you want to remove all the
automatically generated and imported files.
+In other words, you might want to reverse the bootstrap process.
+Fortunately Git has a good program for this job: @command{git clean}.
+Run the following command and every file that is not version controlled will
be removed.
@example
git clean -fxd
@end example
@noindent
-It is best to commit any recent change before running this
-command. You might have created new files since the last commit and if
-they haven't been committed, they will all be gone forever (using
-@command{rm}). To get a list of the non-version controlled files
-instead of deleting them, add the @option{n} option to @command{git
-clean}, so it becomes @option{-fxdn}.
+It is best to commit any recent change before running this command.
+You might have created new files since the last commit and if they haven't
been committed, they will all be gone forever (using @command{rm}).
+To get a list of the non-version controlled files instead of deleting them,
add the @option{n} option to @command{git clean}, so it becomes @option{-fxdn}.
@end cartouche
-Besides the @file{bootstrap} and @file{bootstrap.conf}, the
-@file{bootstrapped/} directory and @file{README-hacking} file are also
-related to the bootstrapping process. The former hosts all the imported
-(bootstrapped) directories. Thus, in the version controlled source, it only
-contains a @file{REAME} file, but in the distributed tar-ball it also
-contains sub-directories filled with all bootstrapped
-files. @file{README-hacking} contains a summary of the bootstrapping
-process discussed in this section. It is a necessary reference when you
-haven't built this book yet. It is thus not distributed in the Gnuastro
-tarball.
+Besides the @file{bootstrap} and @file{bootstrap.conf}, the
@file{bootstrapped/} directory and @file{README-hacking} file are also related
to the bootstrapping process.
+The former hosts all the imported (bootstrapped) directories.
+Thus, in the version controlled source, it only contains a @file{README} file,
but in the distributed tar-ball it also contains sub-directories filled with
all bootstrapped files.
+@file{README-hacking} contains a summary of the bootstrapping process
discussed in this section.
+It is a necessary reference when you haven't built this book yet.
+It is thus not distributed in the Gnuastro tarball.
@node Synchronizing, , Bootstrapping, Version controlled source
@subsubsection Synchronizing
-The bootstrapping script (see @ref{Bootstrapping}) is not regularly needed:
-you mainly need it after you have cloned Gnuastro (once) and whenever you
-want to re-import the files from Gnulib, or Autoconf
-archives@footnote{@url{https://savannah.gnu.org/task/index.php?13993} is
-defined for you to check if significant (for Gnuastro) updates are made in
-these repositories, since the last time you pulled from them.} (not too
-common). However, Gnuastro developers are constantly working on Gnuastro
-and are pushing their changes to the official repository. Therefore, your
-local Gnuastro clone will soon be out-dated. Gnuastro has two mailing lists
-dedicated to its developing activities (see @ref{Developing mailing
-lists}). Subscribing to them can help you decide when to synchronize with
-the official repository.
-
-To pull all the most recent work in Gnuastro, run the following command
-from the top Gnuastro directory. If you don't already have a built system,
-ignore @command{make distclean}. The separate steps are described in detail
-afterwards.
+The bootstrapping script (see @ref{Bootstrapping}) is not regularly needed:
you mainly need it after you have cloned Gnuastro (once) and whenever you want
to re-import the files from Gnulib, or Autoconf
archives@footnote{@url{https://savannah.gnu.org/task/index.php?13993} is
defined for you to check if significant (for Gnuastro) updates are made in
these repositories, since the last time you pulled from them.} (not too common).
+However, Gnuastro developers are constantly working on Gnuastro and are
pushing their changes to the official repository.
+Therefore, your local Gnuastro clone will soon be out-dated.
+Gnuastro has two mailing lists dedicated to its developing activities (see
@ref{Developing mailing lists}).
+Subscribing to them can help you decide when to synchronize with the official
repository.
+
+To pull all the most recent work in Gnuastro, run the following command from
the top Gnuastro directory.
+If you don't already have a built system, ignore @command{make distclean}.
+The separate steps are described in detail afterwards.
@example
$ make distclean && git pull && autoreconf -f
@@ -6090,40 +4601,25 @@ $ autoreconf -f
@cindex GNU Autoconf
@cindex Mailing list: info-gnuastro
@cindex @code{info-gnuastro@@gnu.org}
-If Gnuastro was already built in this directory, you don't want some
-outputs from the previous version being mixed with outputs from the newly
-pulled work. Therefore, the first step is to clean/delete all the built
-files with @command{make distclean}. Fortunately the GNU build system
-allows the separation of source and built files (in separate
-directories). This is a great feature to keep your source directory clean
-and you can use it to avoid the cleaning step. Gnuastro comes with a script
-with some useful options for this job. It is useful if you regularly pull
-recent changes, see @ref{Separate build and source directories}.
-
-After the pull, we must re-configure Gnuastro with @command{autoreconf -f}
-(part of GNU Autoconf). It will update the @file{./configure} script and
-all the @file{Makefile.in}@footnote{In the GNU build system,
-@command{./configure} will use the @file{Makefile.in} files to create the
-necessary @file{Makefile} files that are later read by @command{make} to
-build the package.} files based on the hand-written configurations (in
-@file{configure.ac} and the @file{Makefile.am} files). After running
-@command{autoreconf -f}, a warning about @code{TEXI2DVI} might show up, you
-can ignore that.
-
-The most important reason for re-building Gnuastro's build system is to
-generate/update the version number for your updated Gnuastro snapshot. This
-generated version number will include the commit information (see
-@ref{Version numbering}). The version number is included in nearly all
-outputs of Gnuastro's programs, therefore it is vital for reproducing an
-old result.
-
-As a summary, be sure to run `@command{autoreconf -f}' after every change
-in the Git history. This includes synchronization with the main server or
-even a commit you have made yourself.
-
-If you would like to see what has changed since you last synchronized your
-local clone, you can take the following steps instead of the simple command
-above (don't type anything after @code{#}):
+If Gnuastro was already built in this directory, you don't want some outputs
from the previous version being mixed with outputs from the newly pulled work.
+Therefore, the first step is to clean/delete all the built files with
@command{make distclean}.
+Fortunately the GNU build system allows the separation of source and built
files (in separate directories).
+This is a great feature to keep your source directory clean and you can use it
to avoid the cleaning step.
+Gnuastro comes with a script with some useful options for this job.
+It is useful if you regularly pull recent changes, see @ref{Separate build and
source directories}.
+
+After the pull, we must re-configure Gnuastro with @command{autoreconf -f}
(part of GNU Autoconf).
+It will update the @file{./configure} script and all the
@file{Makefile.in}@footnote{In the GNU build system, @command{./configure} will
use the @file{Makefile.in} files to create the necessary @file{Makefile} files
that are later read by @command{make} to build the package.} files based on the
hand-written configurations (in @file{configure.ac} and the @file{Makefile.am}
files).
+After running @command{autoreconf -f}, a warning about @code{TEXI2DVI} might
show up; you can ignore it.
+
+The most important reason for re-building Gnuastro's build system is to
generate/update the version number for your updated Gnuastro snapshot.
+This generated version number will include the commit information (see
@ref{Version numbering}).
+The version number is included in nearly all outputs of Gnuastro's programs,
therefore it is vital for reproducing an old result.
+
+As a summary, be sure to run `@command{autoreconf -f}' after every change in
the Git history.
+This includes synchronization with the main server or even a commit you have
made yourself.
+
+If you would like to see what has changed since you last synchronized your
local clone, you can take the following steps instead of the simple command
above (don't type anything after @code{#}):
@example
$ git checkout master # Confirm if you are on master.
@@ -6134,22 +4630,14 @@ $ autoreconf -f # Update the build
system.
@end example
@noindent
-By default @command{git log} prints the most recent commit first, add the
-@option{--reverse} option to see the changes chronologically. To see
-exactly what has been changed in the source code along with the commit
-message, add a @option{-p} option to the @command{git log}.
+By default @command{git log} prints the most recent commit first; add the
@option{--reverse} option to see the changes chronologically.
+To see exactly what has been changed in the source code along with the commit
message, add a @option{-p} option to the @command{git log}.
-If you want to make changes in the code, have a look at @ref{Developing} to
-get started easily. Be sure to commit your changes in a separate branch
-(keep your @code{master} branch to follow the official repository) and
-re-run @command{autoreconf -f} after the commit. If you intend to send your
-work to us, you can safely use your commit since it will be ultimately
-recorded in Gnuastro's official history. If not, please upload your
-separate branch to a public hosting service, for example
-@url{https://gitlab.com, GitLab}, and link to it in your
-report/paper. Alternatively, run @command{make distcheck} and upload the
-output @file{gnuastro-X.X.X.XXXX.tar.gz} to a publicly accessible webpage
-so your results can be considered scientific (reproducible) later.
+If you want to make changes in the code, have a look at @ref{Developing} to
get started easily.
+Be sure to commit your changes in a separate branch (keep your @code{master}
branch to follow the official repository) and re-run @command{autoreconf -f}
after the commit.
+If you intend to send your work to us, you can safely use your commit since it
will be ultimately recorded in Gnuastro's official history.
+If not, please upload your separate branch to a public hosting service, for
example @url{https://gitlab.com, GitLab}, and link to it in your report/paper.
+Alternatively, run @command{make distcheck} and upload the output
@file{gnuastro-X.X.X.XXXX.tar.gz} to a publicly accessible webpage so your
results can be considered scientific (reproducible) later.
@@ -6166,25 +4654,12 @@ so your results can be considered scientific
(reproducible) later.
@node Build and install, , Downloading the source, Installation
@section Build and install
-This section is basically a longer explanation to the sequence of commands
-given in @ref{Quick start}. If you didn't have any problems during the
-@ref{Quick start} steps, you want to have all the programs of Gnuastro
-installed in your system, you don't want to change the executable names
-during or after installation, you have root access to install the programs
-in the default system wide directory, the Letter paper size of the print
-book is fine for you or as a summary you don't feel like going into the
-details when everything is working, you can safely skip this section.
-
-If you have any of the above problems or you want to understand the details
-for a better control over your build and install, read along. The
-dependencies which you will need prior to configuring, building and
-installing Gnuastro are explained in @ref{Dependencies}. The first three
-steps in @ref{Quick start} need no extra explanation, so we will skip them
-and start with an explanation of Gnuastro specific configuration options
-and a discussion on the installation directory in @ref{Configuring},
-followed by some smaller subsections: @ref{Tests}, @ref{A4 print book}, and
-@ref{Known issues} which explains the solutions to known problems you might
-encounter in the installation steps and ways you can solve them.
+This section is basically a longer explanation to the sequence of commands
given in @ref{Quick start}.
+If you did not have any problems during the @ref{Quick start} steps; you want
all of Gnuastro's programs installed on your system; you do not want to change
the executable names during or after installation; you have root access to
install the programs in the default system-wide directory; and the Letter
paper size of the printed book is fine for you (in short, if everything is
working and you do not feel like going into the details), you can safely skip
this section.
+
+If you have any of the above problems or you want to understand the details
for a better control over your build and install, read along.
+The dependencies which you will need prior to configuring, building and
installing Gnuastro are explained in @ref{Dependencies}.
+The first three steps in @ref{Quick start} need no extra explanation, so we
will skip them and start with an explanation of Gnuastro-specific configuration
options and a discussion of the installation directory in @ref{Configuring},
followed by some smaller subsections: @ref{Tests}, @ref{A4 print book}, and
@ref{Known issues}, which explains known problems you might encounter in the
installation steps and ways to solve them.
@menu
@@ -6204,21 +4679,16 @@ encounter in the installation steps and ways you can
solve them.
@pindex ./configure
@cindex Configuring
-The @command{$ ./configure} step is the most important step in the
-build and install process. All the required packages, libraries,
-headers and environment variables are checked in this step. The
-behaviors of make and make install can also be set through command
-line options to this command.
+The @command{$ ./configure} step is the most important step in the build and
install process.
+All the required packages, libraries, headers and environment variables are
checked in this step.
+The behaviors of make and make install can also be set through command line
options to this command.
@cindex Configure options
@cindex Customizing installation
@cindex Installation, customizing
-The configure script accepts various arguments and options which
-enable the final user to highly customize whatever she is
-building. The options to configure are generally very similar to
-normal program options explained in @ref{Arguments and
-options}. Similar to all GNU programs, you can get a full list of the
-options along with a short explanation by running
+The configure script accepts various arguments and options which enable the
final user to highly customize whatever she is building.
+The options to configure are generally very similar to normal program options
explained in @ref{Arguments and options}.
+Similar to all GNU programs, you can get a full list of the options along with
a short explanation by running
@example
$ ./configure --help
@@ -6226,14 +4696,10 @@ $ ./configure --help
@noindent
@cindex GNU Autoconf
-A complete explanation is also included in the @file{INSTALL} file. Note
-that this file was written by the authors of GNU Autoconf (which builds the
-@file{configure} script), therefore it is common for all programs which use
-the @command{$ ./configure} script for building and installing, not just
-Gnuastro. Here we only discuss cases where you don't have super-user access
-to the system and if you want to change the executable names. But before
-that, a review of the options to configure that are particular to Gnuastro
-are discussed.
+A complete explanation is also included in the @file{INSTALL} file.
+Note that this file was written by the authors of GNU Autoconf (which builds the @file{configure} script), therefore it is common for all programs which use the @command{$ ./configure} script for building and installing, not just Gnuastro.
+Here we only discuss cases where you don't have super-user access to the system, and cases where you want to change the executable names.
+But before that, we review the options to configure that are particular to Gnuastro.
@menu
* Gnuastro configure options:: Configure options particular to Gnuastro.
@@ -6247,11 +4713,9 @@ are discussed.
@cindex @command{./configure} options
@cindex Configure options particular to Gnuastro
-Most of the options to configure (which are to do with building) are
-similar for every program which uses this script. Here the options
-that are particular to Gnuastro are discussed. The next topics explain
-the usage of other configure options which can be applied to any
-program using the GNU build system (through the configure script).
+Most of the options to configure (which are to do with building) are similar for every program which uses this script.
+Here the options that are particular to Gnuastro are discussed.
+The next topics explain the usage of other configure options which can be applied to any program using the GNU build system (through the configure script).
@vtable @option
@@ -6259,111 +4723,102 @@ program using the GNU build system (through the configure script).
@cindex Valgrind
@cindex Debugging
@cindex GNU Debugger
-Compile/build Gnuastro with debugging information, no optimization and
-without shared libraries.
-
-In order to allow more efficient programs when using Gnuastro (after the
-installation), by default Gnuastro is built with a 3rd level (a very high
-level) optimization and no debugging information. By default, libraries are
-also built for static @emph{and} shared linking (see
-@ref{Linking}). However, when there are crashes or unexpected behavior,
-these three features can hinder the process of localizing the problem. This
-configuration option is identical to manually calling the configuration
-script with @code{CFLAGS="-g -O0" --disable-shared}.
-
-In the (rare) situations where you need to do your debugging on the shared
-libraries, don't use this option. Instead run the configure script by
-explicitly setting @code{CFLAGS} like this:
+Compile/build Gnuastro with debugging information, no optimization and without shared libraries.
+
+In order to allow more efficient programs when using Gnuastro (after the installation), by default Gnuastro is built with a 3rd level (a very high level) optimization and no debugging information.
+By default, libraries are also built for static @emph{and} shared linking (see @ref{Linking}).
+However, when there are crashes or unexpected behavior, these three features can hinder the process of localizing the problem.
+This configuration option is identical to manually calling the configuration script with @code{CFLAGS="-g -O0" --disable-shared}.
+
+In the (rare) situations where you need to do your debugging on the shared libraries, don't use this option.
+Instead run the configure script by explicitly setting @code{CFLAGS} like this:
@example
$ ./configure CFLAGS="-g -O0"
@end example
@item --enable-check-with-valgrind
@cindex Valgrind
-Do the @command{make check} tests through Valgrind. Therefore, if any
-crashes or memory-related issues (segmentation faults in particular) occur
-in the tests, the output of Valgrind will also be put in the
-@file{tests/test-suite.log} file without having to manually modify the
-check scripts. This option will also activate Gnuastro's debug mode (see
-the @option{--enable-debug} configure-time option described above).
-
-Valgrind is free software. It is a program for easy checking of
-memory-related issues in programs. It runs a program within its own
-controlled environment and can thus identify the exact line-number in the
-program's source where a memory-related issue occurs. However, it can
-significantly slow-down the tests. So this option is only useful when a
-segmentation fault is found during @command{make check}.
+Do the @command{make check} tests through Valgrind.
+Therefore, if any crashes or memory-related issues (segmentation faults in particular) occur in the tests, the output of Valgrind will also be put in the @file{tests/test-suite.log} file without having to manually modify the check scripts.
+This option will also activate Gnuastro's debug mode (see the @option{--enable-debug} configure-time option described above).
+
+Valgrind is free software.
+It is a program for easy checking of memory-related issues in programs.
+It runs a program within its own controlled environment and can thus identify the exact line-number in the program's source where a memory-related issue occurs.
+However, it can significantly slow-down the tests.
+So this option is only useful when a segmentation fault is found during @command{make check}.
@item --enable-progname
-Only build and install @file{progname} along with any other program that is
-enabled in this fashion. @file{progname} is the name of the executable
-without the @file{ast}, for example @file{crop} for Crop (with the
-executable name of @file{astcrop}).
+Only build and install @file{progname} along with any other program that is enabled in this fashion.
+@file{progname} is the name of the executable without the @file{ast}, for example @file{crop} for Crop (with the executable name of @file{astcrop}).
-Note that by default all the programs will be installed. This option (and
-the @option{--disable-progname} options) are only relevant when you don't
-want to install all the programs. Therefore, if this option is called for
-any of the programs in Gnuastro, any program which is not explicitly
-enabled will not be built or installed.
+Note that by default all the programs will be installed.
+This option (and the @option{--disable-progname} option) is only relevant when you don't want to install all the programs.
+Therefore, if this option is called for any of the programs in Gnuastro, any program which is not explicitly enabled will not be built or installed.
@item --disable-progname
@itemx --enable-progname=no
-Do not build or install the program named @file{progname}. This is
-very similar to the @option{--enable-progname}, but will build and
-install all the other programs except this one.
+Do not build or install the program named @file{progname}.
+This is very similar to the @option{--enable-progname}, but will build and install all the other programs except this one.
+
+@cartouche
+@noindent
+@strong{Note:} If some programs are enabled and some are disabled, it is equivalent to simply enabling those that were enabled.
+Listing the disabled programs is redundant.
+@end cartouche
@item --enable-gnulibcheck
@cindex GNU C library
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
-Enable checks on the GNU Portability Library (Gnulib). Gnulib is used
-by Gnuastro to enable users of non-GNU based operating systems (that
-don't use GNU C library or glibc) to compile and use the advanced
-features that this library provides. We make extensive use of such
-functions. If you give this option to @command{$ ./configure}, when
-you run @command{$ make check}, first the functions in Gnulib will be
-tested, then the Gnuastro executables. If your operating system does
-not support glibc or has an older version of it and you have problems
-in the build process (@command{$ make}), you can give this flag to
-configure to see if the problem is caused by Gnulib not supporting
-your operating system or Gnuastro, see @ref{Known issues}.
+Enable checks on the GNU Portability Library (Gnulib).
+Gnulib is used by Gnuastro to enable users of non-GNU based operating systems (that don't use GNU C library or glibc) to compile and use the advanced features that this library provides.
+We make extensive use of such functions.
+If you give this option to @command{$ ./configure}, when you run @command{$ make check}, first the functions in Gnulib will be tested, then the Gnuastro executables.
+If your operating system does not support glibc or has an older version of it and you have problems in the build process (@command{$ make}), you can give this flag to configure to see if the problem is caused by Gnulib not supporting your operating system or Gnuastro, see @ref{Known issues}.
@item --disable-guide-message
@itemx --enable-guide-message=no
-Do not print a guiding message during the GNU Build process of @ref{Quick
-start}. By default, after each step, a message is printed guiding the user
-what the next command should be. Therefore, after @command{./configure}, it
-will suggest running @command{make}. After @command{make}, it will suggest
-running @command{make check} and so on. If Gnuastro is configured with this
-option, for example
+Do not print a guiding message during the GNU Build process of @ref{Quick start}.
+By default, after each step, a message is printed guiding the user what the next command should be.
+Therefore, after @command{./configure}, it will suggest running @command{make}.
+After @command{make}, it will suggest running @command{make check} and so on.
+If Gnuastro is configured with this option, for example
@example
$ ./configure --disable-guide-message
@end example
-Then these messages will not be printed after any step (like most
-programs). For people who are not yet fully accustomed to this build
-system, these guidelines can be very useful and encouraging. However, if
-you find those messages annoying, use this option.
+Then these messages will not be printed after any step (like most programs).
+For people who are not yet fully accustomed to this build system, these guidelines can be very useful and encouraging.
+However, if you find those messages annoying, use this option.
+@item --without-libgit2
+@cindex Git
+@pindex libgit2
+@cindex Version control systems
+Build Gnuastro without libgit2 (for including Git commit hashes in output files), see @ref{Optional dependencies}.
+libgit2 is an optional dependency; with this option, Gnuastro will ignore any libgit2 that may already be on the system.
-@end vtable
+@item --without-libjpeg
+@pindex libjpeg
+@cindex JPEG format
+Build Gnuastro without libjpeg (for reading/writing to JPEG files), see @ref{Optional dependencies}.
+libjpeg is an optional dependency; with this option, Gnuastro will ignore any libjpeg that may already be on the system.
-@cartouche
-@noindent
-@strong{Note:} If some programs are enabled and some are disabled, it
-is equivalent to simply enabling those that were enabled. Listing the
-disabled programs is redundant.
-@end cartouche
+@item --without-libtiff
+@pindex libtiff
+@cindex TIFF format
+Build Gnuastro without libtiff (for reading/writing to TIFF files), see @ref{Optional dependencies}.
+libtiff is an optional dependency; with this option, Gnuastro will ignore any libtiff that may already be on the system.
+
+@end vtable
-The tests of some programs might depend on the outputs of the tests of
-other programs. For example MakeProfiles is one the first programs to be
-tested when you run @command{$ make check}. MakeProfiles' test outputs
-(FITS images) are inputs to many other programs (which in turn provide
-inputs for other programs). Therefore, if you don't install MakeProfiles
-for example, the tests for many the other programs will be skipped. To
-avoid this, in one run, you can install all the programs and run the tests
-but not install. If everything is working correctly, you can run configure
-again with only the programs you want. However, don't run the tests and
-directly install after building.
+The tests of some programs might depend on the outputs of the tests of other programs.
+For example, MakeProfiles is one of the first programs to be tested when you run @command{$ make check}.
+MakeProfiles' test outputs (FITS images) are inputs to many other programs (which in turn provide inputs for other programs).
+Therefore, if you don't build MakeProfiles for example, the tests for many of the other programs will be skipped.
+To avoid this, in one run, you can build all the programs and run the tests, but not install.
+If everything is working correctly, you can run configure again with only the programs you want.
+However, this second time, don't run the tests; directly install after building.
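The two-pass procedure described above can be sketched as follows (the @option{--enable-crop} program choice is only a hypothetical example; substitute the programs you actually want):

```shell
# first pass: build everything and run the full test suite, but do not install
./configure
make && make check

# second pass: re-configure with only the programs you need (hypothetical
# choice shown), then build and install directly, skipping the tests
./configure --enable-crop
make && make install
```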
@@ -6375,35 +4830,18 @@ directly install after building.
@cindex Root access, not possible
@cindex No access to super-user install
@cindex Install with no super-user access
-One of the most commonly used options to @file{./configure} is
-@option{--prefix}, it is used to define the directory that will host all
-the installed files (or the ``prefix'' in their final absolute file
-name). For example, when you are using a server and you don't have
-administrator or root access. In this example scenario, if you don't use
-the @option{--prefix} option, you won't be able to install the built files
-and thus access them from anywhere without having to worry about where they
-are installed. However, once you prepare your startup file to look into the
-proper place (as discussed thoroughly below), you will be able to easily
-use this option and benefit from any software you want to install without
-having to ask the system administrators or install and use a different
-version of a software that is already installed on the server.
-
-The most basic way to run an executable is to explicitly write its full
-file name (including all the directory information) and run it. One example
-is running the configuration script with the @command{$ ./configure}
-command (see @ref{Quick start}). By giving a specific directory (the
-current directory or @file{./}), we are explicitly telling the shell to
-look in the current directory for an executable file named
-`@file{configure}'. Directly specifying the directory is thus useful for
-executables in the current (or nearby) directories. However, when the
-program (an executable file) is to be used a lot, specifying all those
-directories will become a significant burden. For example, the @file{ls}
-executable lists the contents in a given directory and it is (usually)
-installed in the @file{/usr/bin/} directory by the operating system
-maintainers. Therefore, if using the full address was the only way to
-access an executable, each time you wanted a listing of a directory, you
-would have to run the following command (which is very inconvenient, both
-in writing and in remembering the various directories).
+One of the most commonly used options to @file{./configure} is @option{--prefix}; it is used to define the directory that will host all the installed files (or the ``prefix'' in their final absolute file name).
+This is useful, for example, when you are using a server and you don't have administrator or root access.
+In this example scenario, if you don't use the @option{--prefix} option, you won't be able to install the built files and thus access them from anywhere without having to worry about where they are installed.
+However, once you prepare your startup file to look into the proper place (as discussed thoroughly below), you will be able to easily use this option and benefit from any software you want to install without having to ask the system administrators or install and use a different version of a software that is already installed on the server.
+
+The most basic way to run an executable is to explicitly write its full file name (including all the directory information) and run it.
+One example is running the configuration script with the @command{$ ./configure} command (see @ref{Quick start}).
+By giving a specific directory (the current directory or @file{./}), we are explicitly telling the shell to look in the current directory for an executable file named `@file{configure}'.
+Directly specifying the directory is thus useful for executables in the current (or nearby) directories.
+However, when the program (an executable file) is to be used a lot, specifying all those directories will become a significant burden.
+For example, the @file{ls} executable lists the contents in a given directory and it is (usually) installed in the @file{/usr/bin/} directory by the operating system maintainers.
+Therefore, if using the full address was the only way to access an executable, each time you wanted a listing of a directory, you would have to run the following command (which is very inconvenient, both in writing and in remembering the various directories).
@example
$ /usr/bin/ls
@@ -6411,21 +4849,18 @@ $ /usr/bin/ls
@cindex Shell variables
@cindex Environment variables
-To address this problem, we have the @file{PATH} environment variable. To
-understand it better, we will start with a short introduction to the shell
-variables. Shell variable values are basically treated as strings of
-characters. For example, it doesn't matter if the value is a name (string
-of @emph{alphabetic} characters), or a number (string of @emph{numeric}
-characters), or both. You can define a variable and a value for it by
-running
+To address this problem, we have the @file{PATH} environment variable.
+To understand it better, we will start with a short introduction to the shell variables.
+Shell variable values are basically treated as strings of characters.
+For example, it doesn't matter if the value is a name (string of @emph{alphabetic} characters), or a number (string of @emph{numeric} characters), or both.
+You can define a variable and a value for it by running
@example
$ myvariable1=a_test_value
$ myvariable2="a test value"
@end example
@noindent
-As you see above, if the value contains white space characters, you have to
-put the whole value (including white space characters) in double quotes
-(@key{"}). You can see the value it represents by running
+As you see above, if the value contains white space characters, you have to put the whole value (including white space characters) in double quotes (@key{"}).
+You can see the value it represents by running
@example
$ echo $myvariable1
$ echo $myvariable2
@@ -6433,19 +4868,13 @@ $ echo $myvariable2
@noindent
@cindex Environment
@cindex Environment variables
-If a variable has no value or it wasn't defined, the last command will only
-print an empty line. A variable defined like this will be known as long as
-this shell or terminal is running. Other terminals will have no idea it
-existed. The main advantage of shell variables is that if they are
-exported@footnote{By running @command{$ export myvariable=a_test_value}
-instead of the simpler case in the text}, subsequent programs that are run
-within that shell can access their value. So by changing their value, you
-can change the ``environment'' of a program which uses them. The shell
-variables which are accessed by programs are therefore known as
-``environment variables''@footnote{You can use shell variables for other
-actions too, for example to temporarily keep some names or run loops on
-some files.}. You can see the full list of exported variables that your
-shell recognizes by running:
+If a variable has no value or it wasn't defined, the last command will only print an empty line.
+A variable defined like this will be known as long as this shell or terminal is running.
+Other terminals will have no idea it existed.
+The main advantage of shell variables is that if they are exported@footnote{By running @command{$ export myvariable=a_test_value} instead of the simpler case in the text}, subsequent programs that are run within that shell can access their value.
+So by changing their value, you can change the ``environment'' of a program which uses them.
+The shell variables which are accessed by programs are therefore known as ``environment variables''@footnote{You can use shell variables for other actions too, for example to temporarily keep some names or run loops on some files.}.
+You can see the full list of exported variables that your shell recognizes by running:
@example
$ printenv
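The difference between a plain shell variable and an exported one (described above) can be demonstrated directly; a minimal sketch, with arbitrary variable names:

```shell
plainvar="only this shell"          # defined, but not exported
export expvar="visible to children" # exported: child processes inherit it

# a child shell only sees the exported variable:
sh -c 'echo "expvar=$expvar"'       # prints the exported value
sh -c 'echo "plainvar=$plainvar"'   # prints an empty value
```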
@@ -6454,58 +4883,36 @@ $ printenv
@cindex @file{HOME}
@cindex @file{HOME/.local/}
@cindex Environment variable, @code{HOME}
-@file{HOME} is one commonly used environment variable, it is any user's
-(the one that is logged in) top directory. Try finding it in the command
-above. It is used so often that the shell has a special expansion
-(alternative) for it: `@file{~}'. Whenever you see file names starting with
-the tilde sign, it actually represents the value to the @file{HOME}
-environment variable, so @file{~/doc} is the same as @file{$HOME/doc}.
+@file{HOME} is one commonly used environment variable; it is any user's (the one that is logged in) top directory.
+Try finding it in the command above.
+It is used so often that the shell has a special expansion (alternative) for it: `@file{~}'.
+Whenever you see file names starting with the tilde sign, it actually represents the value of the @file{HOME} environment variable, so @file{~/doc} is the same as @file{$HOME/doc}.
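The equivalence of the tilde and @file{HOME} can be checked directly in any shell:

```shell
echo $HOME    # your home directory, for example /home/name
echo ~        # the shell expands the tilde to the same value
```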
@vindex PATH
@pindex ./configure
@cindex Setting @code{PATH}
@cindex Default executable search directory
@cindex Search directory for executables
-Another one of the most commonly used environment variables is @file{PATH},
-it is a list of directories to search for executable names. Its value is a
-list of directories (separated by a colon, or `@key{:}'). When the address
-of the executable is not explicitly given (like @file{./configure} above),
-the system will look for the executable in the directories specified by
-@file{PATH}. If you have a computer nearby, try running the following
-command to see which directories your system will look into when it is
-searching for executable (binary) files, one example is printed here
-(notice how @file{/usr/bin}, in the @file{ls} example above, is one of the
-directories in @command{PATH}):
+Another one of the most commonly used environment variables is @file{PATH}; it is a list of directories to search for executable names.
+Its value is a list of directories (separated by a colon, or `@key{:}').
+When the address of the executable is not explicitly given (like @file{./configure} above), the system will look for the executable in the directories specified by @file{PATH}.
+If you have a computer nearby, try running the following command to see which directories your system will look into when it is searching for executable (binary) files; one example is printed here (notice how @file{/usr/bin}, in the @file{ls} example above, is one of the directories in @command{PATH}):
@example
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/bin
@end example
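To read the colon-separated value more easily, the list can be printed one directory per line (a small convenience sketch, not one of the manual's original commands):

```shell
# print each PATH directory on its own line by translating ':' to newlines
echo "$PATH" | tr ':' '\n'
```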
-By default @file{PATH} usually contains system-wide directories, which are
-readable (but not writable) by all users, like the above example. Therefore
-if you don't have root (or administrator) access, you need to add another
-directory to @file{PATH} which you actually have write access to. The
-standard directory where you can keep installed files (not just
-executables) for your own user is the @file{~/.local/} directory. The names
-of hidden files start with a `@key{.}' (dot), so it will not show up in
-your common command-line listings, or on the graphical user interface. You
-can use any other directory, but this is the most recognized.
-
-The top installation directory will be used to keep all the package's
-components: programs (executables), libraries, include (header) files,
-shared data (like manuals), or configuration files (see @ref{Review of
-library fundamentals} for a thorough introduction to headers and
-linking). So it commonly has some of the following sub-directories for each
-class of installed components respectively: @file{bin/}, @file{lib/},
-@file{include/} @file{man/}, @file{share/}, @file{etc/}. Since the
-@file{PATH} variable is only used for executables, you can add the
-@file{~/.local/bin} directory (which keeps the executables/programs or more
-generally, ``binary'' files) to @file{PATH} with the following command. As
-defined below, first the existing value of @file{PATH} is used, then your
-given directory is added to its end and the combined value is put back in
-@file{PATH} (run `@command{$ echo $PATH}' afterwards to check if it was
-added).
+By default @file{PATH} usually contains system-wide directories, which are readable (but not writable) by all users, like the above example.
+Therefore if you don't have root (or administrator) access, you need to add another directory to @file{PATH} which you actually have write access to.
+The standard directory where you can keep installed files (not just executables) for your own user is the @file{~/.local/} directory.
+The names of hidden files start with a `@key{.}' (dot), so they will not show up in your common command-line listings, or on the graphical user interface.
+You can use any other directory, but this is the most recognized.
+
+The top installation directory will be used to keep all the package's components: programs (executables), libraries, include (header) files, shared data (like manuals), or configuration files (see @ref{Review of library fundamentals} for a thorough introduction to headers and linking).
+So it commonly has some of the following sub-directories for each class of installed components respectively: @file{bin/}, @file{lib/}, @file{include/}, @file{man/}, @file{share/}, @file{etc/}.
+Since the @file{PATH} variable is only used for executables, you can add the @file{~/.local/bin} directory (which keeps the executables/programs or more generally, ``binary'' files) to @file{PATH} with the following command.
+As defined below, first the existing value of @file{PATH} is used, then your given directory is added to its end and the combined value is put back in @file{PATH} (run `@command{$ echo $PATH}' afterwards to check if it was added).
@example
$ PATH=$PATH:~/.local/bin
@@ -6514,44 +4921,32 @@ $ PATH=$PATH:~/.local/bin
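After running the command above, you can confirm the shell will now search the new directory; a minimal sketch using only standard POSIX commands (nothing here is Gnuastro-specific):

```shell
PATH="$PATH:$HOME/.local/bin"   # same as the command above, with ~ spelled out

# 'command -v' prints the full path the shell will use for a program,
# confirming the PATH search works:
command -v ls
```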
@cindex GNU Bash
@cindex Startup scripts
@cindex Scripts, startup
-Any executable that you installed in @file{~/.local/bin} will now be usable
-without having to remember and write its full address. However, as soon as
-you leave/close your current terminal session, this modified @file{PATH}
-variable will be forgotten. Adding the directories which contain
-executables to the @file{PATH} environment variable each time you start a
-terminal is also very inconvenient and prone to errors. Fortunately, there
-are standard `startup files' defined by your shell precisely for this (and
-other) purposes. There is a special startup file for every significant
-starting step:
+Any executable that you installed in @file{~/.local/bin} will now be usable without having to remember and write its full address.
+However, as soon as you leave/close your current terminal session, this modified @file{PATH} variable will be forgotten.
+Adding the directories which contain executables to the @file{PATH} environment variable each time you start a terminal is also very inconvenient and prone to errors.
+Fortunately, there are standard `startup files' defined by your shell precisely for this (and other) purposes.
+There is a special startup file for every significant starting step:
@table @asis
@cindex GNU Bash
@item @file{/etc/profile} and everything in @file{/etc/profile.d/}
-These startup scripts are called when your whole system starts (for example
-after you turn on your computer). Therefore you need administrator or root
-privileges to access or modify them.
+These startup scripts are called when your whole system starts (for example after you turn on your computer).
+Therefore you need administrator or root privileges to access or modify them.
@item @file{~/.bash_profile}
-If you are using (GNU) Bash as your shell, the commands in this file are
-run, when you log in to your account @emph{through Bash}. Most commonly
-when you login through the virtual console (where there is no graphic user
-interface).
+If you are using (GNU) Bash as your shell, the commands in this file are run when you log in to your account @emph{through Bash}.
+Most commonly when you log in through the virtual console (where there is no graphic user interface).
@item @file{~/.bashrc}
-If you are using (GNU) Bash as your shell, the commands here will be run
-each time you start a terminal and are already logged in. For example, when
-you open your terminal emulator in the graphic user interface.
+If you are using (GNU) Bash as your shell, the commands here will be run each time you start a terminal and are already logged in.
+For example, when you open your terminal emulator in the graphic user interface.
@end table
-For security reasons, it is highly recommended to directly type in your
-@file{HOME} directory value by hand in startup files instead of using
-variables. So in the following, let's assume your user name is
-`@file{name}' (so @file{~} may be replaced with @file{/home/name}). To add
-@file{~/.local/bin} to your @file{PATH} automatically on any startup file,
-you have to ``export'' the new value of @command{PATH} in the startup file
-that is most relevant to you by adding this line:
+For security reasons, it is highly recommended to directly type in your @file{HOME} directory value by hand in startup files instead of using variables.
+So in the following, let's assume your user name is `@file{name}' (so @file{~} may be replaced with @file{/home/name}).
+To add @file{~/.local/bin} to your @file{PATH} automatically on any startup file, you have to ``export'' the new value of @command{PATH} in the startup file that is most relevant to you by adding this line:
@example
export PATH=$PATH:/home/name/.local/bin
@@ -6560,20 +4955,9 @@ export PATH=$PATH:/home/name/.local/bin
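Adding the line by hand with a text editor is the safest route; purely for illustration, appending it from the command line could be sketched like this (the @file{bashrc.demo} file is a hypothetical stand-in for your real startup file, and the guard keeps the line from being duplicated on repeated runs):

```shell
rcfile=./bashrc.demo   # hypothetical stand-in for ~/.bashrc in this sketch
line='export PATH=$PATH:/home/name/.local/bin'

# append the line only if it is not already present (so re-running is safe)
grep -qxF "$line" "$rcfile" 2>/dev/null || echo "$line" >> "$rcfile"
```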
@cindex GNU build system
@cindex Install directory
@cindex Directory, install
-Now that you know your system will look into @file{~/.local/bin} for
-executables, you can tell Gnuastro's configure script to install everything
-in the top @file{~/.local} directory using the @option{--prefix}
-option. When you subsequently run @command{$ make install}, all the
-install-able files will be put in their respective directory under
-@file{~/.local/} (the executables in @file{~/.local/bin}, the compiled
-library files in @file{~/.local/lib}, the library header files in
-@file{~/.local/include} and so on, to learn more about these different
-files, please see @ref{Review of library fundamentals}). Note that tilde
-(`@key{~}') expansion will not happen if you put a `@key{=}' between
-@option{--prefix} and @file{~/.local}@footnote{If you insist on using
-`@key{=}', you can use @option{--prefix=$HOME/.local}.}, so we have avoided
-the @key{=} character here which is optional in GNU-style options, see
-@ref{Options}.
+Now that you know your system will look into @file{~/.local/bin} for executables, you can tell Gnuastro's configure script to install everything in the top @file{~/.local} directory using the @option{--prefix} option.
+When you subsequently run @command{$ make install}, all the install-able files will be put in their respective directory under @file{~/.local/} (the executables in @file{~/.local/bin}, the compiled library files in @file{~/.local/lib}, the library header files in @file{~/.local/include} and so on, to learn more about these different files, please see @ref{Review of library fundamentals}).
+Note that tilde (`@key{~}') expansion will not happen if you put a `@key{=}' between @option{--prefix} and @file{~/.local}@footnote{If you insist on using `@key{=}', you can use @option{--prefix=$HOME/.local}.}, so we have avoided the @key{=} character here which is optional in GNU-style options, see @ref{Options}.
@example
$ ./configure --prefix ~/.local
@@ -6584,17 +4968,12 @@ $ ./configure --prefix ~/.local
@cindex @file{LD_LIBRARY_PATH}
@cindex Library search directory
@cindex Default library search directory
-You can install everything (including libraries like GSL, CFITSIO, or
-WCSLIB which are Gnuastro's mandatory dependencies, see @ref{Mandatory
-dependencies}) locally by configuring them as above. However, recall that
-@command{PATH} is only for executable files, not libraries and that
-libraries can also depend on other libraries. For example WCSLIB depends on
-CFITSIO and Gnuastro needs both. Therefore, when you installed a library in
-a non-recognized directory, you have to guide the program that depends on
-them to look into the necessary library and header file directories. To do
-that, you have to define the @command{LDFLAGS} and @command{CPPFLAGS}
-environment variables respectively. This can be done while calling
-@file{./configure} as shown below:
+You can install everything (including libraries like GSL, CFITSIO, or WCSLIB
which are Gnuastro's mandatory dependencies, see @ref{Mandatory dependencies})
locally by configuring them as above.
+However, recall that @command{PATH} is only for executable files, not
libraries, and that libraries can also depend on other libraries.
+For example, WCSLIB depends on CFITSIO, and Gnuastro needs both.
+Therefore, when you install a library in a non-standard directory, you have
to guide the programs that depend on it to the necessary library and header
file directories.
+To do that, you have to define the @command{LDFLAGS} and @command{CPPFLAGS}
environment variables respectively.
+This can be done while calling @file{./configure} as shown below:
@example
$ ./configure LDFLAGS=-L/home/name/.local/lib \
@@ -6602,19 +4981,10 @@ $ ./configure LDFLAGS=-L/home/name/.local/lib
\
--prefix ~/.local
@end example
-It can be annoying/buggy to do this when configuring every software that
-depends on such libraries. Hence, you can define these two variables in the
-most relevant startup file (discussed above). The convention on using these
-variables doesn't include a colon to separate values (as
-@command{PATH}-like variables do), they use white space characters and each
-value is prefixed with a compiler option@footnote{These variables are
-ultimately used as options while building the programs, so every value has
-be an option name followed be a value as discussed in @ref{Options}.}: note
-the @option{-L} and @option{-I} above (see @ref{Options}), for @option{-I}
-see @ref{Headers}, and for @option{-L}, see @ref{Linking}. Therefore we
-have to keep the value in double quotation signs to keep the white space
-characters and adding the following two lines to the startup file of
-choice:
+It can be annoying (and error-prone) to do this when configuring every
software that depends on such libraries.
+Hence, you can define these two variables in the most relevant startup file
(discussed above).
+The convention for these variables doesn't use a colon to separate values (as
@command{PATH}-like variables do); they use white space characters, and each
value is prefixed with a compiler option@footnote{These variables are
ultimately used as options while building the programs, so every value has to
be an option name followed by a value, as discussed in @ref{Options}.}: note
the @option{-L} and @option{-I} above (see @ref{Options}); for @option{-I} see
@ref{Headers}, and for @option{-L}, see @ref{Linking}.
+Therefore we have to keep the value in double quotation signs to preserve the
white space characters, and add the following two lines to the startup file of
choice:
@example
export LDFLAGS="$LDFLAGS -L/home/name/.local/lib"
@@ -6622,75 +4992,40 @@ export CPPFLAGS="$CPPFLAGS -I/home/name/.local/include"
@end example
@cindex Dynamic libraries
-Dynamic libraries are linked to the executable every time you run a program
-that depends on them (see @ref{Linking} to fully understand this important
-concept). Hence dynamic libraries also require a special path variable
-called @command{LD_LIBRARY_PATH} (same formatting as @command{PATH}). To
-use programs that depend on these libraries, you need to add
-@file{~/.local/lib} to your @command{LD_LIBRARY_PATH} environment variable
-by adding the following line to the relevant start-up file:
+Dynamic libraries are linked to the executable every time you run a program
that depends on them (see @ref{Linking} to fully understand this important
concept).
+Hence dynamic libraries also require a special path variable called
@command{LD_LIBRARY_PATH} (same formatting as @command{PATH}).
+To use programs that depend on these libraries, you need to add
@file{~/.local/lib} to your @command{LD_LIBRARY_PATH} environment variable by
adding the following line to the relevant start-up file:
@example
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/name/.local/lib
@end example
-If you also want to access the Info (see @ref{Info}) and man pages (see
-@ref{Man pages}) documentations add @file{~/.local/share/info} and
-@file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the
-following convention: ``If the value of @command{INFOPATH} ends with a
-colon [or it isn't defined] ..., the initial list of directories is
-constructed by appending the build-time default to the value of
-@command{INFOPATH}.'' So when installing in a non-standard directory and if
-@command{INFOPATH} was not initially defined, add a colon to the end of
-@command{INFOPATH} as shown below, otherwise Info will not be able to find
-system-wide installed documentation:@*@command{echo 'export
-INFOPATH=$INFOPATH:/home/name/.local/share/info:' >> ~/.bashrc}@* Note that
-this is only an internal convention of Info, do not use it for other
-@command{*PATH} variables.} and @command{MANPATH} environment variables
-respectively.
+If you also want to access the Info (see @ref{Info}) and man pages (see
@ref{Man pages}) documentation, add @file{~/.local/share/info} and
@file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the
following convention: ``If the value of @command{INFOPATH} ends with a colon
[or it isn't defined] ..., the initial list of directories is constructed by
appending the build-time default to the value of @command{INFOPATH}.'' So when
installing in a non-standard directory and if @command{INFOPATH} was not
initially defined, add a colon to the end of @command{INFOPATH} as shown
below, otherwise Info will not be able to find system-wide installed
documentation:@*@command{echo 'export
INFOPATH=$INFOPATH:/home/name/.local/share/info:' >> ~/.bashrc}@* Note that
this is only an internal convention of Info, do not use it for other
@command{*PATH} variables.} and @command{MANPATH} environment variables
respectively.
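For example (a sketch; the @file{/home/name} prefix is a placeholder for your
own home directory), the corresponding startup-file lines would be:

```shell
# Illustrative startup-file lines; '/home/name' is a placeholder.
# The trailing colon on INFOPATH follows the Info convention described
# above, so the system-wide documentation directories are still searched.
export INFOPATH="$INFOPATH:/home/name/.local/share/info:"
export MANPATH="$MANPATH:/home/name/.local/share/man"
```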
@cindex Search directory order
@cindex Order in search directory
-A final note is that order matters in the directories that are searched for
-all the variables discussed above. In the examples above, the new directory
-was added after the system specified directories. So if the program,
-library or manuals are found in the system wide directories, the user
-directory is no longer searched. If you want to search your local
-installation first, put the new directory before the already existing list,
-like the example below.
+A final note is that order matters in the directories that are searched for
all the variables discussed above.
+In the examples above, the new directory was added after the system specified
directories.
+So if the program, library or manuals are found in the system wide
directories, the user directory is no longer searched.
+If you want to search your local installation first, put the new directory
before the already existing list, like the example below.
@example
export LD_LIBRARY_PATH=/home/name/.local/lib:$LD_LIBRARY_PATH
@end example
@noindent
-This is good when a library, for example CFITSIO, is already present on the
-system, but the system-wide install wasn't configured with the correct
-configuration flags (see @ref{CFITSIO}), or you want to use a newer version
-and you don't have administrator or root access to update it on the whole
-system/server. If you update @file{LD_LIBRARY_PATH} by placing
-@file{~/.local/lib} first (like above), the linker will first find the
-CFITSIO you installed for yourself and link with it. It thus will never
-reach the system-wide installation.
+This is good when a library, for example CFITSIO, is already present on the
system, but the system-wide install wasn't configured with the correct
configuration flags (see @ref{CFITSIO}), or you want to use a newer version and
you don't have administrator or root access to update it on the whole
system/server.
+If you update @file{LD_LIBRARY_PATH} by placing @file{~/.local/lib} first
(like above), the linker will first find the CFITSIO you installed for yourself
and link with it.
+It thus will never reach the system-wide installation.
-There are important security problems with using local installations first:
-all important system-wide executables and libraries (important executables
-like @command{ls} and @command{cp}, or libraries like the C library) can be
-replaced by non-secure versions with the same file names and put in the
-customized directory (@file{~/.local} in this example). So if you choose to
-search in your customized directory first, please @emph{be sure} to keep it
-clean from executables or libraries with the same names as important system
-programs or libraries.
+There are important security problems with using local installations first:
any system-wide executable or library (critical executables like @command{ls}
and @command{cp}, or libraries like the C library) can be replaced by a
non-secure version with the same file name, placed in the customized
directory (@file{~/.local} in this example).
+So if you choose to search in your customized directory first, please @emph{be
sure} to keep it clean from executables or libraries with the same names as
important system programs or libraries.
@cartouche
@noindent
-@strong{Summary:} When you are using a server which doesn't give you
-administrator/root access AND you would like to give priority to your own
-built programs and libraries, not the version that is (possibly already)
-present on the server, add these lines to your startup file. See above for
-which startup file is best for your case and for a detailed explanation on
-each. Don't forget to replace `@file{/YOUR-HOME-DIR}' with your home
-directory (for example `@file{/home/your-id}'):
+@strong{Summary:} When you are using a server which doesn't give you
administrator/root access AND you would like to give priority to your own built
programs and libraries, not the version that is (possibly already) present on
the server, add these lines to your startup file.
+See above for which startup file is best for your case and for a detailed
explanation on each.
+Don't forget to replace `@file{/YOUR-HOME-DIR}' with your home directory (for
example `@file{/home/your-id}'):
@example
export PATH="/YOUR-HOME-DIR/.local/bin:$PATH"
@@ -6702,10 +5037,8 @@ export
LD_LIBRARY_PATH="/YOUR-HOME-DIR/.local/lib:$LD_LIBRARY_PATH"
@end example
@noindent
-Afterwards, you just need to add an extra
-@option{--prefix=/YOUR-HOME-DIR/.local} to the @file{./configure} command
-of the software that you intend to install. Everything else will be the
-same as a standard build and install, see @ref{Quick start}.
+Afterwards, you just need to add an extra
@option{--prefix=/YOUR-HOME-DIR/.local} to the @file{./configure} command of
the software that you intend to install.
+Everything else will be the same as a standard build and install, see
@ref{Quick start}.
@end cartouche
@node Executable names, Configure and build in RAM, Installation directory,
Configuring
@@ -6713,52 +5046,38 @@ same as a standard build and install, see @ref{Quick
start}.
@cindex Executable names
@cindex Names of executables
-At first sight, the names of the executables for each program might seem to
-be uncommonly long, for example @command{astnoisechisel} or
-@command{astcrop}. We could have chosen terse (and cryptic) names like most
-programs do. We chose this complete naming convention (something like the
-commands in @TeX{}) so you don't have to spend too much time remembering
-what the name of a specific program was. Such complete names also enable
-you to easily search for the programs.
+At first sight, the names of the executables for each program might seem to be
uncommonly long, for example @command{astnoisechisel} or @command{astcrop}.
+We could have chosen terse (and cryptic) names like most programs do.
+We chose this complete naming convention (something like the commands in
@TeX{}) so you don't have to spend too much time remembering what the name of a
specific program was.
+Such complete names also enable you to easily search for the programs.
@cindex Shell auto-complete
@cindex Auto-complete in the shell
-To facilitate typing the names in, we suggest using the shell
-auto-complete. With this facility you can find the executable you want
-very easily. It is very similar to file name completion in the
-shell. For example, simply by typing the letters below (where
-@key{[TAB]} stands for the Tab key on your keyboard)
+To facilitate typing the names in, we suggest using the shell auto-complete.
+With this facility you can find the executable you want very easily.
+It is very similar to file name completion in the shell.
+For example, simply by typing the letters below (where @key{[TAB]} stands for
the Tab key on your keyboard)
@example
$ ast[TAB][TAB]
@end example
@noindent
-you will get the list of all the available executables that start with
-@command{ast} in your @command{PATH} environment variable
-directories. So, all the Gnuastro executables installed on your system
-will be listed. Typing the next letter for the specific program you
-want along with a Tab, will limit this list until you get to your
-desired program.
+you will get the list of all the available executables that start with
@command{ast} in your @command{PATH} environment variable directories.
+So, all the Gnuastro executables installed on your system will be listed.
+Typing the next letter of the specific program you want, followed by a Tab,
will narrow this list down until you reach your desired program.
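As a rough non-interactive sketch of what the shell is doing here (assuming
Bash; the dummy executables below are created purely for illustration), the
@command{compgen} builtin lists the same candidates that @key{[TAB]} would:

```shell
# Create dummy 'ast*' executables in a temporary directory and prepend it
# to PATH; 'compgen -c astn' then lists the commands that typing
# 'astn[TAB]' would complete to (dummy names, for illustration only).
tmp=$(mktemp -d)
touch "$tmp/astcrop" "$tmp/astnoisechisel"
chmod +x "$tmp/astcrop" "$tmp/astnoisechisel"
PATH="$tmp:$PATH" bash -c 'compgen -c astn'
```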
@cindex Names, customize
@cindex Customize executable names
-In case all of this does not convince you and you still want to type
-short names, some suggestions are given below. You should have in mind
-though, that if you are writing a shell script that you might want to
-pass on to others, it is best to use the standard name because other
-users might not have adopted the same customization. The long names
-also serve as a form of documentation in such scripts. A similar
-reasoning can be given for option names in scripts: it is good
-practice to always use the long formats of the options in shell
-scripts, see @ref{Options}.
+In case all of this does not convince you and you still want to type short
names, some suggestions are given below.
+Keep in mind, though, that if you are writing a shell script that you might
want to pass on to others, it is best to use the standard name, because other
users might not have adopted the same customization.
+The long names also serve as a form of documentation in such scripts.
+A similar reasoning can be given for option names in scripts: it is good
practice to always use the long formats of the options in shell scripts, see
@ref{Options}.
@cindex Symbolic link
-The simplest solution is making a symbolic link to the actual
-executable. For example let's assume you want to type @file{ic} to run Crop
-instead of @file{astcrop}. Assuming you installed Gnuastro executables in
-@file{/usr/local/bin} (default) you can do this simply by running the
-following command as root:
+The simplest solution is making a symbolic link to the actual executable.
+For example let's assume you want to type @file{ic} to run Crop instead of
@file{astcrop}.
+Assuming you installed Gnuastro executables in @file{/usr/local/bin} (default)
you can do this simply by running the following command as root:
@example
# ln -s /usr/local/bin/astcrop /usr/local/bin/ic
@@ -6772,32 +5091,21 @@ works.
@vindex --program-prefix
@vindex --program-suffix
@vindex --program-transform-name
-The installed executable names can also be set using options to
-@command{$ ./configure}, see @ref{Configuring}. GNU Autoconf (which
-configures Gnuastro for your particular system), allows the builder
-to change the name of programs with the three options
-@option{--program-prefix}, @option{--program-suffix} and
-@option{--program-transform-name}. The first two are for adding a
-fixed prefix or suffix to all the programs that will be installed.
-This will actually make all the names longer! You can use it to add
-versions of program names to the programs in order to simultaneously
-have two executable versions of a program.
+The installed executable names can also be set using options to @command{$
./configure}, see @ref{Configuring}.
+GNU Autoconf (which configures Gnuastro for your particular system), allows
the builder to change the name of programs with the three options
@option{--program-prefix}, @option{--program-suffix} and
@option{--program-transform-name}.
+The first two are for adding a fixed prefix or suffix to all the programs
that will be installed.
+This will actually make all the names longer!
+You can use it, for example, to append version numbers to the program names,
so that two versions of a program can be installed simultaneously.
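For instance (a hypothetical invocation; the prefix and suffix values are
purely illustrative), the following would install @command{astcrop} as
@command{gnu-astcrop-new}:

```shell
# Hypothetical: add a fixed prefix and suffix to every installed program;
# 'astcrop' would then be installed as 'gnu-astcrop-new'.
$ ./configure --program-prefix=gnu- --program-suffix=-new
```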
@cindex SED, stream editor
@cindex Stream editor, SED
-The third configure option allows you to set the executable name at install
-time using the SED program. SED is a very useful `stream editor'. There are
-various resources on the internet to use it effectively. However, we should
-caution that using configure options will change the actual executable name
-of the installed program and on every re-install (an update for example),
-you have to also add this option to keep the old executable name
-updated. Also note that the documentation or configuration files do not
-change from their standard names either.
+The third configure option allows you to set the executable name at install
time using the SED program.
+SED is a very useful `stream editor'.
+There are various resources on the internet to use it effectively.
+However, we should caution that using configure options will change the
actual executable name of the installed program, and on every re-install (an
update, for example) you have to add this option again to keep your customized
executable name.
+Also note that the documentation or configuration files do not change from
their standard names either.
@cindex Removing @file{ast} from executables
-For example, let's assume that typing @file{ast} on every invocation
-of every program is really annoying you! You can remove this prefix
-from all the executables at configure time by adding this option:
+For example, let's assume that typing @file{ast} on every invocation of every
program is really annoying you! You can remove this prefix from all the
executables at configure time by adding this option:
@example
$ ./configure --program-transform-name='s/ast/ /'
@@ -6810,11 +5118,9 @@ $ ./configure --program-transform-name='s/ast/ /'
@cindex File I/O
@cindex Input/Output, file
-Gnuastro's configure and build process (the GNU build system) involves the
-creation, reading, and modification of a large number of files
-(input/output, or I/O). Therefore file I/O issues can directly affect the
-work of developers who need to configure and build Gnuastro numerous
-times. Some of these issues are listed below:
+Gnuastro's configure and build process (the GNU build system) involves the
creation, reading, and modification of a large number of files (input/output,
or I/O).
+Therefore file I/O issues can directly affect the work of developers who need
to configure and build Gnuastro numerous times.
+Some of these issues are listed below:
@itemize
@cindex HDD
@@ -6825,37 +5131,26 @@ SSDs (decreasing the lifetime).
@cindex Backup
@item
-Having the built files mixed with the source files can greatly affect
-backing up (synchronization) of source files (since it involves the
-management of a large number of small files that are regularly
-changed. Backup software can of course be configured to ignore the built
-files and directories. However, since the built files are mixed with the
-source files and can have a large variety, this will require a high level
-of customization.
+Having the built files mixed with the source files can greatly affect backing
up (synchronization) of source files, since it involves the management of a
large number of small files that are regularly changed.
+Backup software can of course be configured to ignore the built files and
directories.
+However, since the built files are mixed with the source files and can have a
large variety, this will require a high level of customization.
@end itemize
@cindex tmpfs file system
@cindex file systems, tmpfs
-One solution to address both these problems is to use the
-@url{https://en.wikipedia.org/wiki/Tmpfs, tmpfs file system}. Any file in
-tmpfs is actually stored in the RAM (and possibly SAWP), not on HDDs or
-SSDs. The RAM is built for extensive and fast I/O. Therefore the large
-number of file I/Os associated with configuring and building will not harm
-the HDDs or SSDs. Due to the volatile nature of RAM, files in the tmpfs
-file-system will be permanently lost after a power-off. Since all configured
-and built files are derivative files (not files that have been directly
-written by hand) there is no problem in this and this feature can be
-considered as an automatic cleanup.
+One solution to address both these problems is to use the
@url{https://en.wikipedia.org/wiki/Tmpfs, tmpfs file system}.
+Any file in tmpfs is actually stored in the RAM (and possibly SWAP), not on
HDDs or SSDs.
+The RAM is built for extensive and fast I/O.
+Therefore the large number of file I/Os associated with configuring and
building will not harm the HDDs or SSDs.
+Due to the volatile nature of RAM, files in the tmpfs file-system will be
permanently lost after a power-off.
+Since all configured and built files are derivative files (not files that
have been directly written by hand), there is no problem with this, and this
feature can be considered an automatic cleanup.
@cindex Linux kernel
@cindex GNU C library
@cindex GNU build system
-The modern GNU C library (and thus the Linux kernel) defines the
-@file{/dev/shm} directory for this purpose in the RAM (POSIX shared
-memory). To build in it, you can use the GNU build system's ability to
-build in a separate directory (not necessarily in the source directory) as
-shown below. Just set @file{SRCDIR} as the address of Gnuastro's top source
-directory (for example, the unpacked tarball).
+The modern GNU C library (and thus the Linux kernel) defines the
@file{/dev/shm} directory for this purpose in the RAM (POSIX shared memory).
+To build in it, you can use the GNU build system's ability to build in a
separate directory (not necessarily in the source directory) as shown below.
+Just set @file{SRCDIR} as the address of Gnuastro's top source directory (for
example, the unpacked tarball).
@example
$ mkdir /dev/shm/tmp-gnuastro-build
@@ -6864,144 +5159,111 @@ $ SRCDIR/configure --srcdir=SRCDIR
$ make
@end example
-Gnuastro comes with a script to simplify this process of configuring and
-building in a different directory (a ``clean'' build), for more see
-@ref{Separate build and source directories}.
+Gnuastro comes with a script to simplify this process of configuring and
building in a different directory (a ``clean'' build), for more see
@ref{Separate build and source directories}.
@node Separate build and source directories, Tests, Configuring, Build and
install
@subsection Separate build and source directories
The simple steps of @ref{Quick start} will mix the source and built files.
-This can cause inconvenience for developers or enthusiasts following the
-the most recent work (see @ref{Version controlled source}). The current
-section is mainly focused on this later group of Gnuastro users. If you
-just install Gnuastro on major releases (following @ref{Announcements}),
-you can safely ignore this section.
+This can cause inconvenience for developers or enthusiasts following the most
recent work (see @ref{Version controlled source}).
+The current section is mainly focused on this latter group of Gnuastro users.
+If you just install Gnuastro on major releases (following
@ref{Announcements}), you can safely ignore this section.
@cindex GNU build system
-When it is necessary to keep the source (which is under version control),
-but not the derivative (built) files (after checking or installing), the
-best solution is to keep the source and the built files in separate
-directories. One application of this is already discussed in @ref{Configure
-and build in RAM}.
-
-To facilitate this process of configuring and building in a separate
-directory, Gnuastro comes with the @file{developer-build} script. It is
-available in the top source directory and is @emph{not} installed. It will
-make a directory under a given top-level directory (given to
-@option{--top-build-dir}) and build Gnuastro in there directory. It thus
-keeps the source completely separated from the built files. For easy access
-to the built files, it also makes a symbolic link to the built directory in
-the top source files called @file{build}.
-
-When run without any options, default values will be used for its
-configuration. As with Gnuastro's programs, you can inspect the default
-values with @option{-P} (or @option{--printparams}, the output just looks a
-little different here). The default top-level build directory is
-@file{/dev/shm}: the shared memory directory in RAM on GNU/Linux systems as
-described in @ref{Configure and build in RAM}.
+When it is necessary to keep the source (which is under version control), but
not the derivative (built) files (after checking or installing), the best
solution is to keep the source and the built files in separate directories.
+One application of this is already discussed in @ref{Configure and build in
RAM}.
+
+To facilitate this process of configuring and building in a separate
directory, Gnuastro comes with the @file{developer-build} script.
+It is available in the top source directory and is @emph{not} installed.
+It will make a directory under a given top-level directory (given to
@option{--top-build-dir}) and build Gnuastro in that directory.
+It thus keeps the source completely separated from the built files.
+For easy access to the built files, it also makes a symbolic link named
@file{build} to the build directory in the top source directory.
+
+When run without any options, default values will be used for its
configuration.
+As with Gnuastro's programs, you can inspect the default values with
@option{-P} (or @option{--printparams}, the output just looks a little
different here).
+The default top-level build directory is @file{/dev/shm}: the shared memory
directory in RAM on GNU/Linux systems as described in @ref{Configure and build
in RAM}.
@cindex Debug
-Besides these, it also has some features to facilitate the job of
-developers or bleeding edge users like the @option{--debug} option to do a
-fast build, with debug information, no optimization, and no shared
-libraries. Here is the full list of options you can feed to this script to
-configure its operations.
+Besides these, it also has some features to facilitate the job of developers
or bleeding-edge users, like the @option{--debug} option to do a fast build,
with debug information, no optimization, and no shared libraries.
+Here is the full list of options you can feed to this script to configure its
operations.
@cartouche
@noindent
@strong{Not all Gnuastro's common program behavior usable here:}
-@file{developer-build} is just a non-installed script with a very limited
-scope as described above. It thus doesn't have all the common option
-behaviors or configuration files for example.
+@file{developer-build} is just a non-installed script with a very limited
scope as described above.
+It thus doesn't have all the common option behaviors or configuration files,
for example.
@end cartouche
@cartouche
@noindent
@strong{White space between option and value:} @file{developer-build}
-doesn't accept an @key{=} sign between the options and their values. It
-also needs at least one character between the option and its
-value. Therefore @option{-n 4} or @option{--numthreads 4} are acceptable,
-while @option{-n4}, @option{-n=4}, or @option{--numthreads=4}
-aren't. Finally multiple short option names cannot be merged: for example
-you can say @option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4}
-is not acceptable.
+doesn't accept an @key{=} sign between the options and their values.
+It also needs at least one character between the option and its value.
+Therefore @option{-n 4} or @option{--numthreads 4} are acceptable, while
@option{-n4}, @option{-n=4}, or @option{--numthreads=4} aren't.
+Finally, multiple short option names cannot be merged: for example you can
say @option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4} is not
acceptable.
@end cartouche
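To make the spacing rules concrete (hypothetical invocations of the script),
the first two forms below are accepted, while the commented ones are rejected:

```shell
# Accepted: white space between the option and its value.
$ ./developer-build -c -n 4
$ ./developer-build --numthreads 4

# Rejected by this script (though Gnuastro's installed programs accept them):
#   ./developer-build -n4
#   ./developer-build --numthreads=4
#   ./developer-build -cn4
```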
@cartouche
@noindent
-@strong{Reusable for other packages:} This script can be used in any
-software which is configured and built using the GNU Build System. Just
-copy it in the top source directory of that software and run it from there.
+@strong{Reusable for other packages:} This script can be used in any software
which is configured and built using the GNU Build System.
+Just copy it in the top source directory of that software and run it from
there.
@end cartouche
@table @option
@item -b STR
@itemx --top-build-dir STR
-The top build directory to make a directory for the build. If this option
-isn't called, the top build directory is @file{/dev/shm} (only available in
-GNU/Linux operating systems, see @ref{Configure and build in RAM}).
+The top-level directory under which the build directory will be made.
+If this option isn't called, the top build directory is @file{/dev/shm} (only
available in GNU/Linux operating systems, see @ref{Configure and build in RAM}).
@item -V
@itemx --version
-Print the version string of Gnuastro that will be used in the build. This
-string will be appended to the directory name containing the built files.
+Print the version string of Gnuastro that will be used in the build.
+This string will be appended to the directory name containing the built files.
@item -a
@itemx --autoreconf
-Run @command{autoreconf -f} before building the package. In Gnuastro, this
-is necessary when a new commit has been made to the project history. In
-Gnuastro's build system, the Git description will be used as the version,
-see @ref{Version numbering} and @ref{Synchronizing}.
+Run @command{autoreconf -f} before building the package.
+In Gnuastro, this is necessary when a new commit has been made to the project
history.
+In Gnuastro's build system, the Git description will be used as the version,
see @ref{Version numbering} and @ref{Synchronizing}.
@item -c
@itemx --clean
@cindex GNU Autoreconf
-Delete the contents of the build directory (clean it) before starting the
-configuration and building of this run.
+Delete the contents of the build directory (clean it) before starting the
configuration and building of this run.
-This is useful when you have recently pulled changes from the main Git
-repository, or committed a change your self and ran @command{autoreconf -f},
-see @ref{Synchronizing}. After running GNU Autoconf, the version will be
-updated and you need to do a clean build.
+This is useful when you have recently pulled changes from the main Git
repository, or committed a change yourself and ran @command{autoreconf -f},
see @ref{Synchronizing}.
+After running GNU Autoconf, the version will be updated and you need to do a
clean build.
@item -d
@itemx --debug
@cindex Valgrind
@cindex GNU Debugger (GDB)
-Build with debugging flags (for example to use in GNU Debugger, also known
-as GDB, or Valgrind), disable optimization and also the building of shared
-libraries. Similar to running the configure script of below
+Build with debugging flags (for example to use in GNU Debugger, also known as
GDB, or Valgrind), disable optimization and also the building of shared
libraries.
+This is similar to running the configure script as shown below:
@example
$ ./configure --enable-debug
@end example
-Besides all the debugging advantages of building with this option, it will
-also be significantly speed up the build (at the cost of slower built
-programs). So when you are testing something small or working on the build
-system itself, it will be much faster to test your work with this option.
+Besides all the debugging advantages of building with this option, it will also significantly speed up the build (at the cost of slower built programs).
+So when you are testing something small or working on the build system itself, it will be much faster to test your work with this option.
@item -v
@itemx --valgrind
@cindex Valgrind
-Build all @command{make check} tests within Valgrind. For more, see the
-description of @option{--enable-check-with-valgrind} in @ref{Gnuastro
-configure options}.
+Build all @command{make check} tests within Valgrind.
+For more, see the description of @option{--enable-check-with-valgrind} in
@ref{Gnuastro configure options}.
@item -j INT
@itemx --jobs INT
-The maximum number of threads/jobs for Make to build at any moment. As the
-name suggests (Make has an identical option), the number given to this
-option is directly passed on to any call of Make with its @option{-j}
-option.
+The maximum number of threads/jobs for Make to build at any moment.
+As the name suggests (Make has an identical option), the number given to this
option is directly passed on to any call of Make with its @option{-j} option.
@item -C
@itemx --check
-After finishing the build, also run @command{make check}. By default,
-@command{make check} isn't run because the developer usually has their own
-checks to work on (for example defined in @file{tests/during-dev.sh}).
+After finishing the build, also run @command{make check}.
+By default, @command{make check} isn't run because the developer usually has
their own checks to work on (for example defined in @file{tests/during-dev.sh}).
@item -i
@itemx --install
@@ -7009,44 +5271,32 @@ After finishing the build, also run @command{make install}.
@item -D
@itemx --dist
-Run @code{make dist-lzip pdf} to build a distribution tarball (in
-@file{.tar.lz} format) and a PDF manual. This can be useful for archiving,
-or sending to colleagues who don't use Git for an easy build and manual.
+Run @code{make dist-lzip pdf} to build a distribution tarball (in
@file{.tar.lz} format) and a PDF manual.
+This can be useful for archiving, or for sending to colleagues who don't use Git, giving them an easy build and a ready manual.
@item -u STR
@itemx --upload STR
-Activate the @option{--dist} (@option{-D}) option, then use secure copy
-(@command{scp}, part of the SSH tools) to copy the tarball and PDF to the
-@file{src} and @file{pdf} sub-directories of the specified server and its
-directory (value to this option). For example @command{--upload
-my-server:dir}, will copy the tarball in the @file{dir/src}, and the PDF
-manual in @file{dir/pdf} of @code{my-server} server. It will then make a
-symbolic link in the top server directory to the tarball that is called
-@file{gnuastro-latest.tar.lz}.
+Activate the @option{--dist} (@option{-D}) option, then use secure copy
(@command{scp}, part of the SSH tools) to copy the tarball and PDF to the
@file{src} and @file{pdf} sub-directories of the specified server and its
directory (value to this option).
+For example, @command{--upload my-server:dir} will copy the tarball into @file{dir/src}, and the PDF manual into @file{dir/pdf}, on the @code{my-server} server.
+It will then make a symbolic link in the top server directory to the tarball
that is called @file{gnuastro-latest.tar.lz}.
@item -p
@itemx --publish
-Short for @option{--autoreconf --clean --debug --check --upload
-STR}. @option{--debug} is added because it will greatly speed up the
-build. It will have no effect on the produced tarball. This is good when
-you have made a commit and are ready to publish it on your server (if
-nothing crashes). Recall that if any of the previous steps fail the script
-aborts.
+Short for @option{--autoreconf --clean --debug --check --upload STR}.
+@option{--debug} is added because it will greatly speed up the build.
+It will have no effect on the produced tarball.
+This is good when you have made a commit and are ready to publish it on your
server (if nothing crashes).
+Recall that if any of the previous steps fail, the script aborts.
@item -I
@itemx --install-archive
-Short for @option{--autoreconf --clean --check --install --dist}. This is
-useful when you actually want to install the commit you just made (if the
-build and checks succeed). It will also produce a distribution tarball and
-PDF manual for easy access to the installed tarball on your system at a
-later time.
-
-Ideally, Gnuastro's Git version history makes it easy for a prepared system
-to revert back to a different point in history. But Gnuastro also needs to
-bootstrap files and also your collaborators might (usually do!) find it too
-much of a burden to do the bootstrapping themselves. So it is convenient to
-have a tarball and PDF manual of the version you have installed (and are
-using in your research) handily available.
+Short for @option{--autoreconf --clean --check --install --dist}.
+This is useful when you actually want to install the commit you just made (if
the build and checks succeed).
+It will also produce a distribution tarball and PDF manual for easy access to
the installed tarball on your system at a later time.
+
+Ideally, Gnuastro's Git version history makes it easy for a prepared system to
revert back to a different point in history.
+But Gnuastro also needs bootstrapping, and your collaborators might (usually do!) find it too much of a burden to do the bootstrapping themselves.
+So it is convenient to have a tarball and PDF manual of the version you have
installed (and are using in your research) handily available.
@item -h
@itemx --help
@@ -7066,35 +5316,26 @@ current values.
@cindex @file{mock.fits}
@cindex Tests, running
@cindex Checking tests
-After successfully building (compiling) the programs with the @command{$
-make} command you can check the installation before installing. To run the
-tests, run
+After successfully building (compiling) the programs with the @command{$ make} command, you can check the build before installing.
+To run the tests, run
@example
$ make check
@end example
-For every program some tests are designed to check some possible
-operations. Running the command above will run those tests and give
-you a final report. If everything is OK and you have built all the
-programs, all the tests should pass. In case any of the tests fail,
-please have a look at @ref{Known issues} and if that still doesn't fix
-your problem, look that the @file{./tests/test-suite.log} file to see
-if the source of the error is something particular to your system or
-more general. If you feel it is general, please contact us because it
-might be a bug. Note that the tests of some programs depend on the
-outputs of other program's tests, so if you have not installed them
-they might be skipped or fail. Prior to releasing every distribution
-all these tests are checked. If you have a reasonably modern terminal,
-the outputs of the successful tests will be colored green and the
-failed ones will be colored red.
+For every program some tests are designed to check some possible operations.
+Running the command above will run those tests and give you a final report.
+If everything is OK and you have built all the programs, all the tests should
pass.
+In case any of the tests fail, please have a look at @ref{Known issues}; if that still doesn't fix your problem, look at the @file{./tests/test-suite.log} file to see if the source of the error is something particular to your system or more general.
+If you feel it is general, please contact us because it might be a bug.
+Note that the tests of some programs depend on the outputs of other programs' tests, so if you have not installed them they might be skipped or fail.
+Prior to releasing every distribution all these tests are checked.
+If you have a reasonably modern terminal, the outputs of the successful tests
will be colored green and the failed ones will be colored red.
-These scripts can also act as a good set of examples for you to see how the
-programs are run. All the tests are in the @file{tests/} directory. The
-tests for each program are shell scripts (ending with @file{.sh}) in a
-sub-directory of this directory with the same name as the program. See
-@ref{Test scripts} for more detailed information about these scripts in case
-you want to inspect them.
+These scripts can also act as a good set of examples for you to see how the
programs are run.
+All the tests are in the @file{tests/} directory.
+The tests for each program are shell scripts (ending with @file{.sh}) in a
sub-directory of this directory with the same name as the program.
+See @ref{Test scripts} for more detailed information about these scripts in
case you want to inspect them.
@@ -7108,46 +5349,40 @@ you want to inspect them.
@cindex US letter paper size
@cindex Paper size, A4
@cindex Paper size, US letter
-The default print version of this book is provided in the letter paper
-size. If you would like to have the print version of this book on
-paper and you are living in a country which uses A4, then you can
-rebuild the book. The great thing about the GNU build system is that
-the book source code which is in Texinfo is also distributed with
-the program source code, enabling you to do such customization
-(hacking).
+The default print version of this book is provided in the letter paper size.
+If you would like to have the print version of this book on paper and you are
living in a country which uses A4, then you can rebuild the book.
+The great thing about the GNU build system is that the book's source code, which is in Texinfo, is also distributed with the program source code, enabling you to do such customization (hacking).
@cindex GNU Texinfo
-In order to change the paper size, you will need to have GNU Texinfo
-installed. Open @file{doc/gnuastro.texi} with any text editor. This is the
-source file that created this book. In the first few lines you will see
-this line:
+In order to change the paper size, you will need to have GNU Texinfo installed.
+Open @file{doc/gnuastro.texi} with any text editor.
+This is the source file that created this book.
+In the first few lines you will see this line:
@example
@@c@@afourpaper
@end example
@noindent
-In Texinfo, a line is commented with @code{@@c}. Therefore, un-comment this
-line by deleting the first two characters such that it changes to:
+In Texinfo, a line is commented with @code{@@c}.
+Therefore, un-comment this line by deleting the first two characters such that
it changes to:
@example
@@afourpaper
@end example
@noindent
-Save the file and close it. You can now run
+Save the file and close it.
+You can now run the following command
@example
$ make pdf
@end example
@noindent
-and the new PDF book will be available in
-@file{SRCdir/doc/gnuastro.pdf}. By changing the @command{pdf} in
-@command{$ make pdf} to @command{ps} or @command{dvi} you can have the
-book in those formats. Note that you can do this for any book that
-is in Texinfo format, they might not have @code{@@afourpaper} line, so
-you can add it close to the top of the Texinfo source file.
+and the new PDF book will be available in @file{SRCdir/doc/gnuastro.pdf}.
+By changing the @command{pdf} in @command{$ make pdf} to @command{ps} or
@command{dvi} you can have the book in those formats.
+Note that you can do this for any book that is in Texinfo format; it might not have the @code{@@afourpaper} line, so you can add it close to the top of the Texinfo source file.
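The paper-size steps above can be compressed into a non-interactive sketch (a hypothetical variant that uses @command{sed} on a toy stand-in file, instead of editing the real @file{doc/gnuastro.texi} in a text editor):

```shell
# Toy stand-in for the book source (the real file ships in doc/ of the tarball).
printf '@c@afourpaper\n@setfilename gnuastro.info\n' > demo.texi

# Un-comment the paper-size line by deleting the leading '@c':
sed -i 's/^@c@afourpaper$/@afourpaper/' demo.texi
head -n1 demo.texi        # now prints: @afourpaper
# In the real source tree, you would now run: make pdf
```

The same @command{sed} command, run on @file{doc/gnuastro.texi} from the top of the source tree, gives the result described above.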
@@ -7155,37 +5390,32 @@ you can add it close to the top of the Texinfo source file.
@node Known issues, , A4 print book, Build and install
@subsection Known issues
-Depending on your operating system and the version of the compiler you
-are using, you might confront some known problems during the
-configuration (@command{$ ./configure}), compilation (@command{$
-make}) and tests (@command{$ make check}). Here, their solutions are
-discussed.
+Depending on your operating system and the version of the compiler you are using, you might encounter some known problems during configuration (@command{$ ./configure}), compilation (@command{$ make}) and testing (@command{$ make check}).
+Here, their solutions are discussed.
@itemize
@cindex Configuration, not finding library
@cindex Development packages
@item
-@command{$ ./configure}: @emph{Configure complains about not finding a
-library even though you have installed it.} The possible solution is
-based on how you installed the package:
+@command{$ ./configure}: @emph{Configure complains about not finding a library
even though you have installed it.}
+The possible solution depends on how you installed the package:
@itemize
@item
-From your distribution's package manager. Most probably this is
-because your distribution has separated the header files of a library
-from the library parts. Please also install the `development' packages
-for those libraries too. Just add a @file{-dev} or @file{-devel} to
-the end of the package name and re-run the package manager. This will
-not happen if you install the libraries from source. When installed
-from source, the headers are also installed.
+From your distribution's package manager.
+Most probably this is because your distribution has separated the header files
of a library from the library parts.
+Please install the `development' packages for those libraries too.
+Just add a @file{-dev} or @file{-devel} to the end of the package name and
re-run the package manager.
+This will not happen if you install the libraries from source.
+When installed from source, the headers are also installed.
@item
@cindex @command{LDFLAGS}
-From source. Then your linker is not looking where you installed the
-library. If you followed the instructions in this chapter, all the
-libraries will be installed in @file{/usr/local/lib}. So you have to tell
-your linker to look in this directory. To do so, configure Gnuastro like
-this:
+From source.
+Then your linker is not looking where you installed the library.
+If you followed the instructions in this chapter, all the libraries will be
installed in @file{/usr/local/lib}.
+So you have to tell your linker to look in this directory.
+To do so, configure Gnuastro like this:
@example
$ ./configure LDFLAGS="-L/usr/local/lib"
@@ -7201,96 +5431,64 @@ directory}.
@vindex --enable-gnulibcheck
@cindex Gnulib: GNU Portability Library
@cindex GNU Portability Library (Gnulib)
-@command{$ make}: @emph{Complains about an unknown function on a
-non-GNU based operating system.} In this case, please run @command{$
-./configure} with the @option{--enable-gnulibcheck} option to see if
-the problem is from the GNU Portability Library (Gnulib) not
-supporting your system or if there is a problem in Gnuastro, see
-@ref{Gnuastro configure options}. If the problem is not
-in Gnulib
-and after all its tests you get the same complaint from
-@command{make}, then please contact us at
-@file{bug-gnuastro@@gnu.org}. The cause is probably that a function
-that we have used is not supported by your operating system and we
-didn't included it along with the source tar ball. If the function is
-available in Gnulib, it can be fixed immediately.
+@command{$ make}: @emph{Complains about an unknown function on a non-GNU based
operating system.}
+In this case, please run @command{$ ./configure} with the
@option{--enable-gnulibcheck} option to see if the problem is from the GNU
Portability Library (Gnulib) not supporting your system or if there is a
problem in Gnuastro, see @ref{Gnuastro configure options}.
+If the problem is not in Gnulib and after all its tests you get the same
complaint from @command{make}, then please contact us at
@file{bug-gnuastro@@gnu.org}.
+The cause is probably that a function that we have used is not supported by your operating system and we didn't include it along with the source tarball.
+If the function is available in Gnulib, it can be fixed immediately.
@item
@cindex @command{CPPFLAGS}
-@command{$ make}: @emph{Can't find the headers (.h files) of installed
-libraries.} Your C pre-processor (CPP) isn't looking in the right place. To
-fix this, configure Gnuastro with an additional @code{CPPFLAGS} like below
-(assuming the library is installed in @file{/usr/local/include}:
+@command{$ make}: @emph{Can't find the headers (.h files) of installed
libraries.}
+Your C pre-processor (CPP) isn't looking in the right place.
+To fix this, configure Gnuastro with an additional @code{CPPFLAGS} like below (assuming the library is installed in @file{/usr/local/include}):
@example
$ ./configure CPPFLAGS="-I/usr/local/include"
@end example
-If you want to use the libraries for your other programming projects, then
-export this environment variable in a start-up script similar to the case
-for @file{LD_LIBRARY_PATH} explained below, also see @ref{Installation
-directory}.
+If you want to use the libraries for your other programming projects, then
export this environment variable in a start-up script similar to the case for
@file{LD_LIBRARY_PATH} explained below, also see @ref{Installation directory}.
@cindex Tests, only one passes
@cindex @file{LD_LIBRARY_PATH}
@item
-@command{$ make check}: @emph{Only the first couple of tests pass, all the
-rest fail or get skipped.} It is highly likely that when searching for
-shared libraries, your system doesn't look into the @file{/usr/local/lib}
-directory (or wherever you installed Gnuastro or its dependencies). To make
-sure it is added to the list of directories, add the following line to your
-@file{~/.bashrc} file and restart your terminal. Don't forget to change
-@file{/usr/local/lib} if the libraries are installed in other
-(non-standard) directories.
+@command{$ make check}: @emph{Only the first couple of tests pass; all the rest fail or get skipped.}
+It is highly likely that when searching for shared libraries, your system doesn't look into the @file{/usr/local/lib} directory (or wherever you installed Gnuastro or its dependencies).
+To make sure it is added to the list of directories, add the following line to
your @file{~/.bashrc} file and restart your terminal.
+Don't forget to change @file{/usr/local/lib} if the libraries are installed in
other (non-standard) directories.
@example
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
@end example
-You can also add more directories by using a colon `@code{:}' to separate
-them. See @ref{Installation directory} and @ref{Linking} to learn more on
-the @code{PATH} variables and dynamic linking respectively.
+You can also add more directories by using a colon `@code{:}' to separate them.
+See @ref{Installation directory} and @ref{Linking} to learn more about the @code{PATH} variables and dynamic linking, respectively.
@cindex GPL Ghostscript
@item
-@command{$ make check}: @emph{The tests relying on external programs
-(for example @file{fitstopdf.sh} fail.}) This is probably due to the
-fact that the version number of the external programs is too old for
-the tests we have preformed. Please update the program to a more
-recent version. For example to create a PDF image, you will need GPL
-Ghostscript, but older versions do not work, we have successfully
-tested it on version 9.15. Older versions might cause a failure in the
-test result.
+@command{$ make check}: @emph{The tests relying on external programs (for example @file{fitstopdf.sh}) fail.}
+This is probably because the version of the external programs is too old for the tests we have performed.
+Please update the program to a more recent version.
+For example, to create a PDF image you will need GPL Ghostscript, but older versions do not work; we have successfully tested it on version 9.15.
+Older versions might cause a failure in the test result.
@item
@cindex @TeX{}
@cindex GNU Texinfo
-@command{$ make pdf}: @emph{The PDF book cannot be made.} To make a
-PDF book, you need to have the GNU Texinfo program (like any
-program, the more recent the better). A working @TeX{} program is also
-necessary, which you can get from Tex
-Live@footnote{@url{https://www.tug.org/texlive/}}.
+@command{$ make pdf}: @emph{The PDF book cannot be made.}
+To make a PDF book, you need to have the GNU Texinfo program (like any
program, the more recent the better).
+A working @TeX{} program is also necessary, which you can get from TeX Live@footnote{@url{https://www.tug.org/texlive/}}.
@item
@cindex GNU Libtool
-After @code{make check}: do not copy the programs' executables to another
-(for example, the installation) directory manually (using @command{cp}, or
-@command{mv} for example). In the default configuration@footnote{If you
-configure Gnuastro with the @option{--disable-shared} option, then the
-libraries will be statically linked to the programs and this problem won't
-exist, see @ref{Linking}.}, the program binaries need to link with
-Gnuastro's shared library which is also built and installed with the
-programs. Therefore, to run successfully before and after installation,
-linking modifications need to be made by GNU Libtool at installation
-time. @command{make install} does this internally, but a simple copy might
-give linking errors when you run it. If you need to copy the executables,
-you can do so after installation.
+After @code{make check}: do not copy the programs' executables to another (for example, the installation) directory manually (using @command{cp} or @command{mv}, for example).
+In the default configuration@footnote{If you configure Gnuastro with the
@option{--disable-shared} option, then the libraries will be statically linked
to the programs and this problem won't exist, see @ref{Linking}.}, the program
binaries need to link with Gnuastro's shared library which is also built and
installed with the programs.
+Therefore, to run successfully before and after installation, linking
modifications need to be made by GNU Libtool at installation time.
+@command{make install} does this internally, but a simple copy might give
linking errors when you run it.
+If you need to copy the executables, you can do so after installation.
@end itemize
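When several of the problems above trace back to a non-standard installation prefix, the environment-variable fixes can live together in one start-up snippet; a hypothetical @file{~/.bashrc} fragment (assuming the @file{/usr/local} prefix used in this chapter):

```shell
# Hypothetical ~/.bashrc fragment for libraries installed under /usr/local.
export CPPFLAGS="-I/usr/local/include $CPPFLAGS"          # pre-processor: header search
export LDFLAGS="-L/usr/local/lib $LDFLAGS"                # linker: build-time library search
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"  # dynamic linker: run-time search
```

Restart your terminal (or run @command{source ~/.bashrc}) for the variables to take effect in new shells.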
@noindent
-If your problem was not listed above, please file a bug report
-(@ref{Report a bug}).
+If your problem was not listed above, please file a bug report (@ref{Report a
bug}).
@@ -7313,52 +5511,27 @@ If your problem was not listed above, please file a bug report
@node Common program behavior, Data containers, Installation, Top
@chapter Common program behavior
-All the programs in Gnuastro share a set of common behavior mainly to do
-with user interaction to facilitate their usage and development. This
-includes how to feed input datasets into the programs, how to configure
-them, specifying the outputs, numerical data types, treating columns of
-information in tables and etc. This chapter is devoted to describing this
-common behavior in all programs. Because the behaviors discussed here are
-common to several programs, they are not repeated in each program's
-description.
-
-In @ref{Command-line}, a very general description of running the programs
-on the command-line is discussed, like difference between arguments and
-options, as well as options that are common/shared between all
-programs. None of Gnuastro's programs keep any internal configuration value
-(values for their different operational steps), they read their
-configuration primarily from the command-line, then from specific files in
-directory, user, or system-wide settings. Using these configuration files
-can greatly help reproducible and robust usage of Gnuastro, see
-@ref{Configuration files} for more.
-
-It is not possible to always have the different options and configurations
-of each program on the top of your head. It is very natural to forget the
-options of a program, their current default values, or how it should be run
-and what it did. Gnuastro's programs have multiple ways to help you refresh
-your memory in multiple levels (just an option name, a short description,
-or fast access to the relevant section of the manual. See @ref{Getting
-help} for more for more on benefiting from this very convenient feature.
-
-Many of the programs use the multi-threaded character of modern CPUs, in
-@ref{Multi-threaded operations} we'll discuss how you can configure this
-behavior, along with some tips on making best use of them. In @ref{Numeric
-data types}, we'll review the various types to store numbers in your
-datasets: setting the proper type for the usage context@footnote{For
-example if the values in your dataset can only be integers between 0 or
-65000, store them in a unsigned 16-bit type, not 64-bit floating point type
-(which is the default in most systems). It takes four times less space and
-is much faster to process.} can greatly improve the file size and also speed
-of reading, writing or processing them.
-
-We'll then look into the recognized table formats in @ref{Tables} and how
-large datasets are broken into tiles, or mesh grid in
-@ref{Tessellation}. Finally, we'll take a look at the behavior regarding
-output files: @ref{Automatic output} describes how the programs set a
-default name for their output when you don't give one explicitly (using
-@option{--output}). When the output is a FITS file, all the programs also
-store some very useful information in the header that is discussed in
-@ref{Output FITS files}.
+All the programs in Gnuastro share a set of common behavior mainly to do with
user interaction to facilitate their usage and development.
+This includes how to feed input datasets into the programs, how to configure
them, specifying the outputs, numerical data types, treating columns of
information in tables, etc.
+This chapter is devoted to describing this common behavior in all programs.
+Because the behaviors discussed here are common to several programs, they are
not repeated in each program's description.
+
+In @ref{Command-line}, a very general description of running the programs on the command-line is discussed, like the difference between arguments and options, as well as options that are common/shared between all programs.
+None of Gnuastro's programs keep any internal configuration values (values for their different operational steps); they read their configuration primarily from the command-line, then from specific files for directory, user, or system-wide settings.
+Using these configuration files can greatly help reproducible and robust usage
of Gnuastro, see @ref{Configuration files} for more.
+
+It is not possible to always have the different options and configurations of each program in your head.
+It is very natural to forget the options of a program, their current default
values, or how it should be run and what it did.
+Gnuastro's programs have multiple ways to help you refresh your memory at multiple levels (just an option name, a short description, or fast access to the relevant section of the manual).
+See @ref{Getting help} for more on benefiting from this very convenient feature.
+
+Many of the programs use the multi-threaded nature of modern CPUs; in @ref{Multi-threaded operations} we'll discuss how you can configure this behavior, along with some tips on making the best use of them.
+In @ref{Numeric data types}, we'll review the various types to store numbers in your datasets: setting the proper type for the usage context@footnote{For example, if the values in your dataset can only be integers between 0 and 65000, store them in an unsigned 16-bit type, not a 64-bit floating point type (which is the default in most systems).
+It takes four times less space and is much faster to process.} can greatly improve the file size and also the speed of reading, writing or processing them.
+
+We'll then look into the recognized table formats in @ref{Tables} and how
large datasets are broken into tiles, or mesh grid in @ref{Tessellation}.
+Finally, we'll take a look at the behavior regarding output files:
@ref{Automatic output} describes how the programs set a default name for their
output when you don't give one explicitly (using @option{--output}).
+When the output is a FITS file, all the programs also store some very useful
information in the header that is discussed in @ref{Output FITS files}.
@menu
* Command-line:: How to use the command-line.
@@ -7376,13 +5549,10 @@ store some very useful information in the header that is discussed in
@node Command-line, Configuration files, Common program behavior, Common
program behavior
@section Command-line
-Gnuastro's programs are customized through the standard Unix-like
-command-line environment and GNU style command-line options. Both are very
-common in many Unix-like operating system programs. In @ref{Arguments and
-options} we'll start with the difference between arguments and options and
-elaborate on the GNU style of options. Afterwards, in @ref{Common options},
-we'll go into the detailed list of all the options that are common to all
-the programs in Gnuastro.
+Gnuastro's programs are customized through the standard Unix-like command-line
environment and GNU style command-line options.
+Both are very common in many Unix-like operating system programs.
+In @ref{Arguments and options} we'll start with the difference between
arguments and options and elaborate on the GNU style of options.
+Afterwards, in @ref{Common options}, we'll go into the detailed list of all
the options that are common to all the programs in Gnuastro.
@menu
* Arguments and options:: Different ways to specify inputs and
configuration.
@@ -7398,73 +5568,47 @@ the programs in Gnuastro.
@cindex Command-line options
@cindex Arguments to programs
@cindex Command-line arguments
-When you type a command on the command-line, it is passed onto the shell (a
-generic name for the program that manages the command-line) as a string of
-characters. As an example, see the ``Invoking ProgramName'' sections in
-this manual for some examples of commands with each program, like
-@ref{Invoking asttable}, @ref{Invoking astfits}, or @ref{Invoking
-aststatistics}.
-
-The shell then brakes up your string into separate @emph{tokens} or
-@emph{words} using any @emph{metacharacters} (like white-space, tab,
-@command{|}, @command{>} or @command{;}) that are in the string. On the
-command-line, the first thing you usually enter is the name of the program
-you want to run. After that, you can specify two types of tokens:
-@emph{arguments} and @emph{options}. In the GNU-style, arguments are those
-tokens that are not preceded by any hyphens (@command{-}, see
-@ref{Arguments}). Here is one example:
+When you type a command on the command-line, it is passed onto the shell (a
generic name for the program that manages the command-line) as a string of
characters.
+As an example, see the ``Invoking ProgramName'' sections in this manual for
some examples of commands with each program, like @ref{Invoking asttable},
@ref{Invoking astfits}, or @ref{Invoking aststatistics}.
+
+The shell then breaks up your string into separate @emph{tokens} or @emph{words} using any @emph{metacharacters} (like white-space, tab, @command{|}, @command{>} or @command{;}) that are in the string.
+On the command-line, the first thing you usually enter is the name of the
program you want to run.
+After that, you can specify two types of tokens: @emph{arguments} and
@emph{options}.
+In the GNU-style, arguments are those tokens that are not preceded by any
hyphens (@command{-}, see @ref{Arguments}).
+Here is one example:
@example
$ astcrop --center=53.162551,-27.789676 -w10/3600 --mode=wcs udf.fits
@end example
-In the example above, we are running @ref{Crop} to crop a region of width
-10 arc-seconds centered at the given RA and Dec from the input Hubble
-Ultra-Deep Field (UDF) FITS image. Here, the argument is
-@file{udf.fits}. Arguments are most commonly the input file names
-containing your data. Options start with one or two hyphens, followed by an
-identifier for the option (the option's name, for example,
-@option{--center}, @option{-w}, @option{--mode} in the example above) and
-its value (anything after the option name, or the optional @key{=}
-character). Through options you can configure how the program runs
-(interprets the data you provided).
+In the example above, we are running @ref{Crop} to crop a region of width 10
arc-seconds centered at the given RA and Dec from the input Hubble Ultra-Deep
Field (UDF) FITS image.
+Here, the argument is @file{udf.fits}.
+Arguments are most commonly the input file names containing your data.
+Options start with one or two hyphens, followed by an identifier for the
option (the option's name, for example, @option{--center}, @option{-w},
@option{--mode} in the example above) and its value (anything after the option
name, or the optional @key{=} character).
+Through options you can configure how the program runs (interprets the data
you provided).
@vindex --help
@vindex --usage
@cindex Mandatory arguments
-Arguments can be mandatory and optional and unlike options, they don't have
-any identifiers. Hence, when there multiple arguments, their order might
-also matter (for example in @command{cp} which is used for copying one file
-to another location). The outputs of @option{--usage} and @option{--help}
-shows which arguments are optional and which are mandatory, see
-@ref{--usage}.
-
-As their name suggests, @emph{options} can be considered to be optional and
-most of the time, you don't have to worry about what order you specify them
-in. When the order does matter, or the option can be invoked multiple
-times, it is explicitly mentioned in the ``Invoking ProgramName'' section
-of each program (this is a very important aspect of an option).
-
-@cindex Metacharacters on the command-line
-In case your arguments or option values contain any of the shell's
-meta-characters, you have to quote them. If there is only one such
-character, you can use a backslash (@command{\}) before it. If there are
-multiple, it might be easier to simply put your whole argument or option
-value inside of double quotes (@command{"}). In such cases, everything
-inside the double quotes will be seen as one token or word.
+Arguments can be mandatory or optional and, unlike options, they don't have
any identifiers.
+Hence, when there are multiple arguments, their order might also matter (for
example in @command{cp}, which is used for copying one file to another location).
+The outputs of @option{--usage} and @option{--help} show which arguments are
optional and which are mandatory, see @ref{--usage}.
+
+As their name suggests, @emph{options} can be considered to be optional and
most of the time, you don't have to worry about what order you specify them in.
+When the order does matter, or the option can be invoked multiple times, it is
explicitly mentioned in the ``Invoking ProgramName'' section of each program
(this is a very important aspect of an option).
+
+@cindex Metacharacters on the command-line
+In case your arguments or option values contain any of the shell's
meta-characters, you have to quote them.
+If there is only one such character, you can use a backslash (@command{\})
before it.
+If there are multiple, it might be easier to simply put your whole argument or
option value inside of double quotes (@command{"}).
+In such cases, everything inside the double quotes will be seen as one token
or word.
@cindex HDU
@cindex Header data unit
-For example, let's say you want to specify the header data unit (HDU) of
-your FITS file using a complex expression like `@command{3; images(exposure
-> 100)}'. If you simply add these after the @option{--hdu} (@option{-h})
-option, the programs in Gnuastro will read the value to the HDU option as
-`@command{3}' and run. Then, the shell will attempt to run a separate
-command `@command{images(exposure > 100)}' and complain about a syntax
-error. This is because the semicolon (@command{;}) is an `end of command'
-character in the shell. To solve this problem you can simply put double
-quotes around the whole string you want to pass to @option{--hdu} as seen
-below:
+For example, let's say you want to specify the header data unit (HDU) of your
FITS file using a complex expression like `@command{3; images(exposure > 100)}'.
+If you simply add these after the @option{--hdu} (@option{-h}) option, the
programs in Gnuastro will read the value to the HDU option as `@command{3}' and
run.
+Then, the shell will attempt to run a separate command
`@command{images(exposure > 100)}' and complain about a syntax error.
+This is because the semicolon (@command{;}) is an `end of command' character
in the shell.
+To solve this problem you can simply put double quotes around the whole string
you want to pass to @option{--hdu} as seen below:
@example
$ astcrop --hdu="3; images(exposure > 100)" image.fits
@end example
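Since the quoting behavior above is pure shell (independent of any Gnuastro program), it can be checked with a throw-away shell function that just counts its arguments; @command{count_args} below is a hypothetical stand-in for a real program:

```shell
# count_args is a hypothetical stand-in for a real program: it only
# reports how many separate tokens the shell handed to it.
count_args() { echo $#; }

# Quoted: the semicolon is protected, so the whole HDU string plus the
# file name arrive as exactly two tokens.
count_args --hdu="3; images(exposure > 100)" image.fits   # prints 2

# A single metacharacter can also be protected with a backslash:
count_args 3\;                                            # prints 1
```

Without the quotes, the shell would stop the first command at the semicolon and try to run `images(exposure > 100)` as a second command.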
@@ -7480,21 +5624,14 @@ $ astcrop --hdu="3; images(exposure > 100)" image.fits
@node Arguments, Options, Arguments and options, Arguments and options
@subsubsection Arguments
-In Gnuastro, arguments are almost exclusively used as the input data file
-names. Please consult the first few paragraph of the ``Invoking
-ProgramName'' section for each program for a description of what it expects
-as input, how many arguments, or input data, it accepts, or in what
-order. Everything particular about how a program treats arguments, is
-explained under the ``Invoking ProgramName'' section for that program.
-
-Generally, if there is a standard file name extension for a particular
-format, that filename extension is used to separate the kinds of
-arguments. The list below shows the data formats that are recognized in
-Gnuastro's programs based on their file name endings. Any argument that
-doesn't end with the specified extensions below is considered to be a text
-file (usually catalogs, see @ref{Tables}). In some cases, a program
-can accept specific formats, for example @ref{ConvertType} also accepts
-@file{.jpg} images.
+In Gnuastro, arguments are almost exclusively used as the input data file
names.
+Please consult the first few paragraphs of the ``Invoking ProgramName''
section for each program for a description of what it expects as input, how
many arguments, or input data, it accepts, or in what order.
+Everything particular about how a program treats arguments is explained under
the ``Invoking ProgramName'' section for that program.
+
+Generally, if there is a standard file name extension for a particular format,
that filename extension is used to separate the kinds of arguments.
+The list below shows the data formats that are recognized in Gnuastro's
programs based on their file name endings.
+Any argument that doesn't end with the specified extensions below is
considered to be a text file (usually catalogs, see @ref{Tables}).
+In some cases, a program can accept specific formats, for example
@ref{ConvertType} also accepts @file{.jpg} images.
@cindex Astronomical data suffixes
@cindex Suffixes, astronomical data
@@ -7520,14 +5657,9 @@ can accept specific formats, for example
@ref{ConvertType} also accepts
@end itemize
-Through out this book and in the command-line outputs, whenever we
-want to generalize all such astronomical data formats in a text place
-holder, we will use @file{ASTRdata}, we will assume that the extension
-is also part of this name. Any file ending with these names is
-directly passed on to CFITSIO to read. Therefore you don't necessarily
-have to have these files on your computer, they can also be located on
-an FTP or HTTP server too, see the CFITSIO manual for more
-information.
+Throughout this book and in the command-line outputs, whenever we want to
generalize all such astronomical data formats in a text place holder, we will
use @file{ASTRdata}; we will assume that the extension is also part of this
name.
+Any file ending with these names is directly passed on to CFITSIO to read.
+Therefore you don't necessarily have to have these files on your computer;
they can also be located on an FTP or HTTP server, see the CFITSIO manual for
more information.
CFITSIO has its own error reporting techniques, if your input file(s)
cannot be opened, or read, those errors will be printed prior to the
@@ -7541,35 +5673,26 @@ final error by Gnuastro.
@cindex GNU style options
@cindex Options, GNU style
@cindex Options, short (@option{-}) and long (@option{--})
-Command-line options allow configuring the behavior of a program in all
-GNU/Linux applications for each particular execution on a particular input
-data. A single option can be called in two ways: @emph{long} or
-@emph{short}. All options in Gnuastro accept the long format which has two
-hyphens an can have many characters (for example @option{--hdu}). Short
-options only have one hyphen (@key{-}) followed by one character (for
-example @option{-h}). You can see some examples in the list of options in
-@ref{Common options} or those for each program's ``Invoking ProgramName''
-section. Both formats are shown for those which support both. First the
-short is shown then the long.
-
-Usually, the short options are for when you are writing on the command-line
-and want to save keystrokes and time. The long options are good for shell
-scripts, where you aren't usually rushing. Long options provide a level of
-documentation, since they are more descriptive and less cryptic. Usually
-after a few months of not running a program, the short options will be
-forgotten and reading your previously written script will not be easy.
+Command-line options allow configuring the behavior of a program in all
GNU/Linux applications for each particular execution on a particular input
dataset.
+A single option can be called in two ways: @emph{long} or @emph{short}.
+All options in Gnuastro accept the long format, which has two hyphens and can
have many characters (for example @option{--hdu}).
+Short options only have one hyphen (@key{-}) followed by one character (for
example @option{-h}).
+You can see some examples in the list of options in @ref{Common options} or
those for each program's ``Invoking ProgramName'' section.
+Both formats are shown for those which support both.
+First the short format is shown, then the long.
+
+Usually, the short options are for when you are writing on the command-line
and want to save keystrokes and time.
+The long options are good for shell scripts, where you aren't usually rushing.
+Long options provide a level of documentation, since they are more descriptive
and less cryptic.
+Usually after a few months of not running a program, the short options will be
forgotten and reading your previously written script will not be easy.
@cindex On/Off options
@cindex Options, on/off
-Some options need to be given a value if they are called and some
-don't. You can think of the latter type of options as on/off options. These
-two types of options can be distinguished using the output of the
-@option{--help} and @option{--usage} options, which are common to all GNU
-software, see @ref{Getting help}. In Gnuastro we use the following strings
-to specify when the option needs a value and what format that value should
-be in. More specific tests will be done in the program and if the values
-are out of range (for example negative when the program only wants a
-positive value), an error will be reported.
+Some options need to be given a value if they are called and some don't.
+You can think of the latter type of options as on/off options.
+These two types of options can be distinguished using the output of the
@option{--help} and @option{--usage} options, which are common to all GNU
software, see @ref{Getting help}.
+In Gnuastro we use the following strings to specify when the option needs a
value and what format that value should be in.
+More specific tests will be done in the program and if the values are out of
range (for example negative when the program only wants a positive value), an
error will be reported.
@vtable @option
@@ -7577,9 +5700,9 @@ positive value), an error will be reported.
The value is read as an integer.
@item FLT
-The value is read as a float. There are generally two types, depending
-on the context. If they are for fractions, they will have to be less
-than or equal to unity.
+The value is read as a float.
+There are generally two types, depending on the context.
+If they are for fractions, they will have to be less than or equal to unity.
@item STR
The value is read as a string of characters (for example a file name)
@@ -7590,96 +5713,66 @@ or other particular settings like a HDU name, see below.
@noindent
@cindex Values to options
@cindex Option values
-To specify a value in the short format, simply put the value after the
-option. Note that since the short options are only one character long,
-you don't have to type anything between the option and its value. For
-the long option you either need white space or an @option{=} sign, for
-example @option{-h2}, @option{-h 2}, @option{--hdu 2} or
-@option{--hdu=2} are all equivalent.
-
-The short format of on/off options (those that don't need values) can be
-concatenated for example these two hypothetical sequences of options are
-equivalent: @option{-a -b -c4} and @option{-abc4}. As an example, consider
-the following command to run Crop:
+To specify a value in the short format, simply put the value after the option.
+Note that since the short options are only one character long, you don't have
to type anything between the option and its value.
+For the long options you either need white space or an @option{=} sign; for
example @option{-h2}, @option{-h 2}, @option{--hdu 2} and @option{--hdu=2} are
all equivalent.
+
+The short format of on/off options (those that don't need values) can be
concatenated; for example, these two hypothetical sequences of options are
equivalent: @option{-a -b -c4} and @option{-abc4}.
+As an example, consider the following command to run Crop:
@example
$ astcrop -Dr3 --wwidth 3 catalog.txt --deccol=4 ASTRdata
@end example
@noindent
-The @command{$} is the shell prompt, @command{astcrop} is the
-program name. There are two arguments (@command{catalog.txt} and
-@command{ASTRdata}) and four options, two of them given in short
-format (@option{-D}, @option{-r}) and two in long format
-(@option{--width} and @option{--deccol}). Three of them require a
-value and one (@option{-D}) is an on/off option.
+The @command{$} is the shell prompt and @command{astcrop} is the program name.
+There are two arguments (@command{catalog.txt} and @command{ASTRdata}) and
four options, two of them given in short format (@option{-D}, @option{-r}) and
two in long format (@option{--wwidth} and @option{--deccol}).
+Three of them require a value and one (@option{-D}) is an on/off option.
@vindex --printparams
@cindex Options, abbreviation
@cindex Long option abbreviation
-If an abbreviation is unique between all the options of a program, the
-long option names can be abbreviated. For example, instead of typing
-@option{--printparams}, typing @option{--print} or maybe even
-@option{--pri} will be enough, if there are conflicts, the program
-will warn you and show you the alternatives. Finally, if you want the
-argument parser to stop parsing arguments beyond a certain point, you
-can use two dashes: @option{--}. No text on the command-line beyond
-these two dashes will be parsed.
+If an abbreviation is unique between all the options of a program, the long
option names can be abbreviated.
+For example, instead of typing @option{--printparams}, typing @option{--print}
or maybe even @option{--pri} will be enough; if there are conflicts, the
program will warn you and show you the alternatives.
+Finally, if you want the argument parser to stop parsing arguments beyond a
certain point, you can use two dashes: @option{--}.
+No text on the command-line beyond these two dashes will be parsed.
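The effect of @option{--} can be sketched with standard GNU coreutils (no Gnuastro program is needed): a file whose name starts with a hyphen is indistinguishable from an option until parsing is stopped.

```shell
# Work in a throw-away directory so the odd file name cannot do harm.
cd "$(mktemp -d)"

# Create a file literally named "-n"; "--" stops option parsing, so
# touch treats "-n" as an argument instead of an option.
touch -- -n

# Without "--", rm would try to parse "-n" as an option; with "--"
# everything after it is taken as a file name and the file is removed.
rm -- -n
```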
@cindex Repeated options
@cindex Options, repeated
-Gnuastro has two types of options with values, those that only take a
-single value are the most common type. If these options are repeated or
-called more than once on the command-line, the value of the last time it
-was called will be assigned to it. This is very useful when you are
-testing/experimenting. Let's say you want to make a small modification to
-one option value. You can simply type the option with a new value in the
-end of the command and see how the script works. If you are satisfied with
-the change, you can remove the original option for human readability. If
-the change wasn't satisfactory, you can remove the one you just added and
-not worry about forgetting the original value. Without this capability, you
-would have to memorize or save the original value somewhere else, run the
-command and then change the value again which is not at all convenient and
-is potentially cause lots of bugs.
-
-On the other hand, some options can be called multiple times in one run of
-a program and can thus take multiple values (for example see the
-@option{--column} option in @ref{Invoking asttable}. In these cases, the
-order of stored values is the same order that you specified on the
-command-line.
+Gnuastro has two types of options with values; those that only take a single
value are the most common type.
+If these options are repeated or called more than once on the command-line,
the value of the last time it was called will be assigned to it.
+This is very useful when you are testing/experimenting.
+Let's say you want to make a small modification to one option value.
+You can simply type the option with a new value at the end of the command and
see how the script works.
+If you are satisfied with the change, you can remove the original option for
human readability.
+If the change wasn't satisfactory, you can remove the one you just added and
not worry about forgetting the original value.
+Without this capability, you would have to memorize or save the original value
somewhere else, run the command and then change the value again, which is not
at all convenient and can potentially cause lots of bugs.
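This last-occurrence-wins behavior is not special to Gnuastro; any parser that simply overwrites a stored value on every occurrence behaves the same way, as this minimal @command{getopts} sketch (with a hypothetical @option{-w} width option) shows:

```shell
# A minimal sketch of last-value-wins: each occurrence of the
# (hypothetical) -w option simply overwrites the stored value.
parse() {
  OPTIND=1           # reset getopts state between calls
  width=""
  while getopts "w:" opt; do
    case $opt in
      w) width=$OPTARG ;;   # later -w values overwrite earlier ones
    esac
  done
  echo "$width"
}

parse -w 5 -w 10     # prints 10
```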
+
+On the other hand, some options can be called multiple times in one run of a
program and can thus take multiple values (for example see the
@option{--column} option in @ref{Invoking asttable}).
+In these cases, the order of stored values is the same order that you
specified on the command-line.
@cindex Configuration files
@cindex Default option values
-Gnuastro's programs don't keep any internal default values, so some options
-are mandatory and if they don't have a value, the program will complain and
-abort. Most programs have many such options and typing them by hand on
-every call is impractical. To facilitate the user experience, after parsing
-the command-line, Gnuastro's programs read special configuration files to
-get the necessary values for the options you haven't identified on the
-command-line. These configuration files are fully described in
-@ref{Configuration files}.
+Gnuastro's programs don't keep any internal default values, so some options
are mandatory and if they don't have a value, the program will complain and
abort.
+Most programs have many such options and typing them by hand on every call is
impractical.
+To facilitate the user experience, after parsing the command-line, Gnuastro's
programs read special configuration files to get the necessary values for the
options you haven't identified on the command-line.
+These configuration files are fully described in @ref{Configuration files}.
@cartouche
@noindent
@cindex Tilde expansion as option values
-@strong{CAUTION:} In specifying a file address, if you want to use the
-shell's tilde expansion (@command{~}) to specify your home directory,
-leave at least one space between the option name and your value. For
-example use @command{-o ~/test}, @command{--output ~/test} or
-@command{--output= ~/test}. Calling them with @command{-o~/test} or
-@command{--output=~/test} will disable shell expansion.
+@strong{CAUTION:} In specifying a file address, if you want to use the shell's
tilde expansion (@command{~}) to specify your home directory, leave at least
one space between the option name and your value.
+For example use @command{-o ~/test}, @command{--output ~/test} or
@command{--output= ~/test}.
+Calling them with @command{-o~/test} or @command{--output=~/test} will disable
shell expansion.
@end cartouche
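The pitfall above can be seen with plain @command{echo} standing in for a real program: in a simple command's arguments, the shell only expands a tilde at the start of a word, so the value must be a separate word.

```shell
# echo stands in for a real program, so the argument the shell
# actually produces is printed back.
echo -o ~/test    # ~ starts a word: expanded to e.g. /home/you/test
echo -o~/test     # ~ is mid-word: passed through literally
```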
@cartouche
@noindent
-@strong{CAUTION:} If you forget to specify a value for an option which
-requires one, and that option is the last one, Gnuastro will warn you. But
-if it is in the middle of the command, it will take the text of the next
-option or argument as the value which can cause undefined behavior.
+@strong{CAUTION:} If you forget to specify a value for an option which
requires one, and that option is the last one, Gnuastro will warn you.
+But if it is in the middle of the command, it will take the text of the next
option or argument as the value which can cause undefined behavior.
@end cartouche
@cartouche
@noindent
@cindex Counting from zero.
-@strong{NOTE:} In some contexts Gnuastro's counting starts from 0 and in
-others 1. You can assume by default that counting starts from 1, if it
-starts from 0 for a special option, it will be explicitly mentioned.
+@strong{NOTE:} In some contexts Gnuastro's counting starts from 0 and in
others 1.
+You can assume by default that counting starts from 1; if it starts from 0 for
a special option, it will be explicitly mentioned.
@end cartouche
@node Common options, Standard input, Arguments and options, Command-line
@@ -7687,14 +5780,10 @@ starts from 0 for a special option, it will be
explicitly mentioned.
@cindex Options common to all programs
@cindex Gnuastro common options
-To facilitate the job of the users and developers, all the programs in
-Gnuastro share some basic command-line options for the options that are
-common to many of the programs. The full list is classified as @ref{Input
-output options}, @ref{Processing options}, and @ref{Operating mode
-options}. In some programs, some of the options are irrelevant, but still
-recognized (you won't get an unrecognized option error, but the value isn't
-used). Unless otherwise mentioned, these options are identical between all
-programs.
+To facilitate the job of the users and developers, all the programs in
Gnuastro share some basic command-line options that are common to many of the
programs.
+The full list is classified as @ref{Input output options}, @ref{Processing
options}, and @ref{Operating mode options}.
+In some programs, some of the options are irrelevant, but still recognized
(you won't get an unrecognized option error, but the value isn't used).
+Unless otherwise mentioned, these options are identical between all programs.
@menu
* Input output options:: Common input/output options.
@@ -7713,110 +5802,78 @@ programs.
@cindex Timeout
@cindex Standard input
@item --stdintimeout
-Number of micro-seconds to wait for writing/typing in the @emph{first line}
-of standard input from the command-line (see @ref{Standard input}). This is
-only relevant for programs that also accept input from the standard input,
-@emph{and} you want to manually write/type the contents on the
-terminal. When the standard input is already connected to a pipe (output of
-another program), there won't be any waiting (hence no timeout, thus making
-this option redundant).
-
-If the first line-break (for example with the @key{ENTER} key) is not
-provided before the timeout, the program will abort with an error that no
-input was given. Note that this time interval is @emph{only} for the first
-line that you type. Once the first line is given, the program will assume
-that more data will come and accept rest of your inputs without any time
-limit. You need to specify the ending of the standard input, for example by
-pressing @key{CTRL-D} after a new line.
-
-Note that any input you write/type into a program on the command-line with
-Standard input will be discarded (lost) once the program is finished. It is
-only recoverable manually from your command-line (where you actually typed)
-as long as the terminal is open. So only use this feature when you are sure
-that you don't need the dataset (or have a copy of it somewhere else).
+Number of micro-seconds to wait for writing/typing in the @emph{first line} of
standard input from the command-line (see @ref{Standard input}).
+This is only relevant for programs that also accept input from the standard
input, @emph{and} you want to manually write/type the contents on the terminal.
+When the standard input is already connected to a pipe (output of another
program), there won't be any waiting (hence no timeout, thus making this option
redundant).
+
+If the first line-break (for example with the @key{ENTER} key) is not provided
before the timeout, the program will abort with an error that no input was
given.
+Note that this time interval is @emph{only} for the first line that you type.
+Once the first line is given, the program will assume that more data will come
and accept the rest of your inputs without any time limit.
+You need to specify the ending of the standard input, for example by pressing
@key{CTRL-D} after a new line.
+
+Note that any input you write/type into a program on the command-line with
Standard input will be discarded (lost) once the program is finished.
+It is only recoverable manually from your command-line (where you actually
typed) as long as the terminal is open.
+So only use this feature when you are sure that you don't need the dataset (or
have a copy of it somewhere else).
@cindex HDU
@cindex Header data unit
@item -h STR/INT
@itemx --hdu=STR/INT
-The name or number of the desired Header Data Unit, or HDU, in the FITS
-image. A FITS file can store multiple HDUs or extensions, each with either
-an image or a table or nothing at all (only a header). Note that counting
-of the extensions starts from 0(zero), not 1(one). Counting from 0 is
-forced on us by CFITSIO which directly reads the value you give with this
-option (see @ref{CFITSIO}). When specifying the name, case is not important
-so @command{IMAGE}, @command{image} or @command{ImAgE} are equivalent.
-
-CFITSIO has many capabilities to help you find the extension you want, far
-beyond the simple extension number and name. See CFITSIO manual's ``HDU
-Location Specification'' section for a very complete explanation with
-several examples. A @code{#} is appended to the string you specify for the
-HDU@footnote{With the @code{#} character, CFITSIO will only read the
-desired HDU into your memory, not all the existing HDUs in the fits file.}
-and the result is put in square brackets and appended to the FITS file name
-before calling CFITSIO to read the contents of the HDU for all the programs
-in Gnuastro.
+The name or number of the desired Header Data Unit, or HDU, in the FITS image.
+A FITS file can store multiple HDUs or extensions, each with either an image
or a table or nothing at all (only a header).
+Note that counting of the extensions starts from 0 (zero), not 1 (one).
+Counting from 0 is forced on us by CFITSIO which directly reads the value you
give with this option (see @ref{CFITSIO}).
+When specifying the name, case is not important so @command{IMAGE},
@command{image} or @command{ImAgE} are equivalent.
+
+CFITSIO has many capabilities to help you find the extension you want, far
beyond the simple extension number and name.
+See CFITSIO manual's ``HDU Location Specification'' section for a very
complete explanation with several examples.
+A @code{#} is appended to the string you specify for the HDU@footnote{With the
@code{#} character, CFITSIO will only read the desired HDU into your memory,
not all the existing HDUs in the FITS file.} and the result is put in square
brackets and appended to the FITS file name before calling CFITSIO to read the
contents of the HDU for all the programs in Gnuastro.
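Based on the description above, the string that finally reaches CFITSIO can be sketched in the shell (the file name and HDU value here are hypothetical):

```shell
# Hypothetical inputs: what you would pass with --hdu and as the argument.
hdu='3; images(exposure > 100)'
file='image.fits'

# Per the description above: append "#" to the HDU string, wrap it in
# square brackets, and append that to the file name for CFITSIO.
echo "${file}[${hdu}#]"    # prints: image.fits[3; images(exposure > 100)#]
```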
@item -s STR
@itemx --searchin=STR
-Where to match/search for columns when the column identifier wasn't a
-number, see @ref{Selecting table columns}. The acceptable values are
-@command{name}, @command{unit}, or @command{comment}. This option is only
-relevant for programs that take table columns as input.
+Where to match/search for columns when the column identifier wasn't a number,
see @ref{Selecting table columns}.
+The acceptable values are @command{name}, @command{unit}, or @command{comment}.
+This option is only relevant for programs that take table columns as input.
@item -I
@itemx --ignorecase
-Ignore case while matching/searching column meta-data (in the field
-specified by the @option{--searchin}). The FITS standard suggests to treat
-the column names as case insensitive, which is strongly recommended here
-also but is not enforced. This option is only relevant for programs that
-take table columns as input.
+Ignore case while matching/searching column meta-data (in the field specified
by @option{--searchin}).
+The FITS standard suggests treating the column names as case-insensitive,
which is strongly recommended here also, but is not enforced.
+This option is only relevant for programs that take table columns as input.
-This option is not relevant to @ref{BuildProgram}, hence in that program the
-short option @option{-I} is used for include directories, not to ignore
-case.
+This option is not relevant to @ref{BuildProgram}, hence in that program the
short option @option{-I} is used for include directories, not to ignore case.
@item -o STR
@itemx --output=STR
-The name of the output file or directory. With this option the automatic
-output names explained in @ref{Automatic output} are ignored.
+The name of the output file or directory.
+With this option the automatic output names explained in @ref{Automatic
output} are ignored.
@item -T STR
@itemx --type=STR
-The data type of the output depending on the program context. This option
-isn't applicable to some programs like @ref{Fits} and will be ignored by
-them. The different acceptable values to this option are fully described in
-@ref{Numeric data types}.
+The data type of the output depending on the program context.
+This option isn't applicable to some programs like @ref{Fits} and will be
ignored by them.
+The different acceptable values to this option are fully described in
@ref{Numeric data types}.
@item -D
@itemx --dontdelete
-By default, if the output file already exists, Gnuastro's programs will
-silently delete it and put their own outputs in its place. When this option
-is activated, if the output file already exists, the programs will not
-delete it, will warn you, and will abort.
+By default, if the output file already exists, Gnuastro's programs will
silently delete it and put their own outputs in its place.
+When this option is activated, if the output file already exists, the programs
will not delete it, will warn you, and will abort.
@item -K
@itemx --keepinputdir
-In automatic output names, don't remove the directory information of the
-input file names. As explained in @ref{Automatic output}, if no output name
-is specified (with @option{--output}), then the output name will be made in
-the existing directory based on your input's file name (ignoring the
-directory of the input). If you call this option, the directory information
-of the input will be kept and the automatically generated output name will
-be in the same directory as the input (usually with a suffix added). Note
-that his is only relevant if you are running the program in a different
-directory than the input data.
+In automatic output names, don't remove the directory information of the input file names.
+As explained in @ref{Automatic output}, if no output name is specified (with @option{--output}), then the output name will be made in the existing directory based on your input's file name (ignoring the directory of the input).
+If you call this option, the directory information of the input will be kept and the automatically generated output name will be in the same directory as the input (usually with a suffix added).
+Note that this is only relevant if you are running the program in a different directory than the input data.
@item -t STR
@itemx --tableformat=STR
-The output table's type. This option is only relevant when the output is a
-table and its format cannot be deduced from its filename. For example, if a
-name ending in @file{.fits} was given to @option{--output}, then the
-program knows you want a FITS table. But there are two types of FITS
-tables: FITS ASCII, and FITS binary. Thus, with this option, the program is
-able to identify which type you want. The currently recognized values to
-this option are:
+The output table's type.
+This option is only relevant when the output is a table and its format cannot be deduced from its filename.
+For example, if a name ending in @file{.fits} was given to @option{--output}, then the program knows you want a FITS table.
+But there are two types of FITS tables: FITS ASCII, and FITS binary.
+Thus, with this option, the program is able to identify which type you want.
+The currently recognized values to this option are:
@table @command
@item txt
@@ -7834,66 +5891,44 @@ A FITS binary table (see @ref{Recognized table formats}).
@node Processing options, Operating mode options, Input output options, Common options
@subsubsection Processing options
-Some processing steps are common to several programs, so they are defined
-as common options to all programs. Note that this class of common options
-is thus necessarily less common between all the programs than those
-described in @ref{Input output options}, or @ref{Operating mode options}
-options. Also, if they are irrelevant for a program, these options will not
-display in the @option{--help} output of the program.
+Some processing steps are common to several programs, so they are defined as common options to all programs.
+Note that this class of common options is thus necessarily less common between all the programs than those described in @ref{Input output options}, or @ref{Operating mode options} options.
+Also, if they are irrelevant for a program, these options will not display in the @option{--help} output of the program.
@table @option
@item --minmapsize=INT
-The minimum size (in bytes) to store the contents of each main processing
-array of a program as a file (on the non-volatile HDD/SSD), not in
-RAM. This can be very useful when you have limited RAM, but need to process
-large datasets which can be very memory intensive. In such scenarios,
-without this option, the program will crash.
-
-A random filename is assigned to the array. This file will keep the
-contents of the array as long as it is necessary and the program will
-delete it as soon as its not necessary any more.
-
-If the @file{.gnuastro} directory exists and is writable, then the random
-file will be placed in there. Otherwise, the @file{.gnuastro_mmap}
-directory will be checked. If @file{.gnuastro_mmap} does not exist, or
-@file{.gnuastro} is not writable, the random file will be directly written
-in the current directory with the @file{.gnuastro_mmap_} prefix.
-
-By default, the name of the created file, and its size (in bytes) is
-printed by the program when it is created and later, when its
-deleted/freed. These messages are useful to the user who has enough RAM,
-but has forgot to increase the value to @code{--minmapsize} (this is often
-the case). To supress/disable such messages, use the @code{--quietmmap}
-option.
+The minimum size (in bytes) to store the contents of each main processing array of a program as a file (on the non-volatile HDD/SSD), not in RAM.
+This can be very useful when you have limited RAM, but need to process large datasets which can be very memory intensive.
+In such scenarios, without this option, the program will crash.
+
+A random filename is assigned to the array.
+This file will keep the contents of the array as long as it is necessary and the program will delete it as soon as it is no longer necessary.
+
+If the @file{.gnuastro} directory exists and is writable, then the random file will be placed in there.
+Otherwise, the @file{.gnuastro_mmap} directory will be checked.
+If @file{.gnuastro_mmap} does not exist, or @file{.gnuastro} is not writable, the random file will be directly written in the current directory with the @file{.gnuastro_mmap_} prefix.
+
+By default, the name of the created file and its size (in bytes) are printed by the program when it is created and later, when it is deleted/freed.
+These messages are useful to the user who has enough RAM, but has forgotten to increase the value of @code{--minmapsize} (this is often the case).
+To suppress/disable such messages, use the @code{--quietmmap} option.
+
+When this option has a value of @code{0} (zero, strongly discouraged, see box below), all arrays that use this feature in a program will actually be placed in a file (not in RAM).
+When this option is larger than all the input datasets, all arrays will be definitely allocated in RAM and the program will run MUCH faster.
-When this option has a value of @code{0} (zero, strongly discouraged, see
-box below), all arrays that use this feature in a program will actually be
-placed in a file (not in RAM). When this option is larger than all the
-input datasets, all arrays will be definitely allocated in RAM and the
-program will run MUCH faster.
-
-Please note that using a non-volatile file (in the HDD/SDD) instead of RAM
-can significantly increase the program's running time, especially on HDDs
-(where read/write is slower). So it is best to give this option large
-values by default. You can then decrease it for a specific program's
-invocation on a large input after you see memory issues arise (for example
-an error, or the program not aborting and fully consuming your memory).
-
-The random file will be deleted once it is no longer needed by the
-program. The @file{.gnuastro} directory will also be deleted if it has no
-other contents (you may also have configuration files in this directory,
-see @ref{Configuration files}). If you see randomly named files remaining
-in this directory when the program finishes normally, please send us a bug
-report so we address the problem, see @ref{Report a bug}.
+Please note that using a non-volatile file (in the HDD/SSD) instead of RAM can significantly increase the program's running time, especially on HDDs (where read/write is slower).
+So it is best to give this option large values by default.
+You can then decrease it for a specific program's invocation on a large input after you see memory issues arise (for example an error, or the program not aborting and fully consuming your memory).
+
+The random file will be deleted once it is no longer needed by the program.
+The @file{.gnuastro} directory will also be deleted if it has no other contents (you may also have configuration files in this directory, see @ref{Configuration files}).
+If you see randomly named files remaining in this directory when the program finishes normally, please send us a bug report so that we can address the problem, see @ref{Report a bug}.
@cartouche
@noindent
-@strong{Limited number of memory-mapped files:} The operating system
-kernels usually support a limited number of memory-mapped files. Therefore
-never set @code{--minmapsize} to zero or a small number of bytes (so too
-many files are created). If the kernel capacity is exceeded, the program
-will crash.
+@strong{Limited number of memory-mapped files:} The operating system kernels usually support a limited number of memory-mapped files.
+Therefore never set @code{--minmapsize} to zero or a small number of bytes (so too many files are created).
+If the kernel capacity is exceeded, the program will crash.
@end cartouche
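As a concrete illustration, @option{--minmapsize} can be set once in a configuration file instead of on every command-line. The sketch below follows the format described in @ref{Configuration file format} (option name, white space, value); the file name and the value are only illustrative, not a recommendation:

```conf
# Hypothetical current-directory configuration file, for example
# .gnuastro/astnoisechisel.conf (see @ref{Configuration files}).
# A large illustrative value, so arrays normally stay in RAM:
minmapsize     10000000000
quietmmap      1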
@item --quietmmap
@@ -7903,70 +5938,47 @@ for more.
@item -Z INT[,INT[,...]]
@itemx --tilesize=INT[,INT[,...]]
-The size of regular tiles for tessellation, see @ref{Tessellation}. For
-each dimension an integer length (in units of data-elements or pixels) is
-necessary. If the number of input dimensions is different from the number
-of values given to this option, the program will stop with an error. Values
-must be separated by commas (@key{,}) and can also be fractions (for
-example @code{4/2}). If they are fractions, the result must be an integer,
-otherwise an error will be printed.
+The size of regular tiles for tessellation, see @ref{Tessellation}.
+For each dimension an integer length (in units of data-elements or pixels) is necessary.
+If the number of input dimensions is different from the number of values given to this option, the program will stop with an error.
+Values must be separated by commas (@key{,}) and can also be fractions (for example @code{4/2}).
+If they are fractions, the result must be an integer, otherwise an error will be printed.
@item -M INT[,INT[,...]]
@itemx --numchannels=INT[,INT[,...]]
-The number of channels for larger input tessellation, see
-@ref{Tessellation}. The number and types of acceptable values are similar
-to @option{--tilesize}. The only difference is that instead of length, the
-integers values given to this option represent the @emph{number} of
-channels, not their size.
+The number of channels for larger input tessellation, see @ref{Tessellation}.
+The number and types of acceptable values are similar to @option{--tilesize}.
+The only difference is that instead of length, the integer values given to this option represent the @emph{number} of channels, not their size.
@item -F FLT
@itemx --remainderfrac=FLT
-The fraction of remainder size along all dimensions to add to the first
-tile. See @ref{Tessellation} for a complete description. This option is
-only relevant if @option{--tilesize} is not exactly divisible by the input
-dataset's size in a dimension. If the remainder size is larger than this
-fraction (compared to @option{--tilesize}), then the remainder size will be
-added with one regular tile size and divided between two tiles at the start
-and end of the given dimension.
+The fraction of remainder size along all dimensions to add to the first tile.
+See @ref{Tessellation} for a complete description.
+This option is only relevant if @option{--tilesize} is not exactly divisible by the input dataset's size in a dimension.
+If the remainder size is larger than this fraction (compared to @option{--tilesize}), then the remainder size will be added with one regular tile size and divided between two tiles at the start and end of the given dimension.
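The @option{--remainderfrac} rule is easiest to see numerically. Below is a rough Python sketch of the rule along one dimension, as worded above; it is an illustration only, not Gnuastro's actual implementation, and the function name is hypothetical:

```python
def tile_sizes(length, tilesize, remainderfrac):
    """Illustrative sketch of the --remainderfrac rule (one dimension).

    NOT Gnuastro's actual code: a reading of the rule described in the
    text, for building intuition only.
    """
    ntiles = length // tilesize
    rem = length % tilesize
    if rem == 0:
        # Exactly divisible: all tiles are regular.
        return [tilesize] * ntiles
    if rem / tilesize > remainderfrac:
        # Large remainder: merge it with one regular tile, then split
        # the sum between the first and last tiles of this dimension.
        combined = rem + tilesize
        first = combined // 2
        return [first] + [tilesize] * (ntiles - 1) + [combined - first]
    # Small remainder: absorb it into the first tile.
    return [rem + tilesize] + [tilesize] * (ntiles - 1)
```

For example, with a length of 105, a tile size of 30 and a fraction of 0.1, the 15-element remainder exceeds 0.1 of the tile size, so under this sketch the tiles become 22, 30, 30 and 23 elements wide.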
@item --workoverch
-Ignore the channel borders for the high-level job of the given
-application. As a result, while the channel borders are respected in
-defining the small tiles (such that no tile will cross a channel border),
-the higher-level program operation will ignore them, see
-@ref{Tessellation}.
+Ignore the channel borders for the high-level job of the given application.
+As a result, while the channel borders are respected in defining the small tiles (such that no tile will cross a channel border), the higher-level program operation will ignore them, see @ref{Tessellation}.
@item --checktiles
-Make a FITS file with the same dimensions as the input but each pixel is
-replaced with the ID of the tile that it is associated with. Note that the
-tile IDs start from 0. See @ref{Tessellation} for more on Tiling an image
-in Gnuastro.
+Make a FITS file with the same dimensions as the input but each pixel is replaced with the ID of the tile that it is associated with.
+Note that the tile IDs start from 0.
+See @ref{Tessellation} for more on Tiling an image in Gnuastro.
@item --oneelempertile
-When showing the tile values (for example with @option{--checktiles}, or
-when the program's output is tessellated) only use one element for each
-tile. This can be useful when only the relative values given to each tile
-compared to the rest are important or need to be checked. Since the tiles
-usually have a large number of pixels within them the output will be much
-smaller, and so easier to read, write, store, or send.
-
-Note that when the full input size in any dimension is not exactly
-divisible by the given @option{--tilesize} in that dimension, the edge
-tile(s) will have different sizes (in units of the input's size), see
-@option{--remainderfrac}. But with this option, all displayed values are
-going to have the (same) size of one data-element. Hence, in such cases,
-the image proportions are going to be slightly different with this
-option.
+When showing the tile values (for example with @option{--checktiles}, or when the program's output is tessellated) only use one element for each tile.
+This can be useful when only the relative values given to each tile compared to the rest are important or need to be checked.
+Since the tiles usually have a large number of pixels within them, the output will be much smaller, and so easier to read, write, store, or send.
-If your input image is not exactly divisible by the tile size and you want
-one value per tile for some higher-level processing, all is not lost
-though. You can see how many pixels were within each tile (for example to
-weight the values or discard some for later processing) with Gnuastro's
-Statistics (see @ref{Statistics}) as shown below. The output FITS file is
-going to have two extensions, one with the median calculated on each tile
-and one with the number of elements that each tile covers. You can then use
-the @code{where} operator in @ref{Arithmetic} to set the values of all
-tiles that don't have the regular area to a blank value.
+Note that when the full input size in any dimension is not exactly divisible by the given @option{--tilesize} in that dimension, the edge tile(s) will have different sizes (in units of the input's size), see @option{--remainderfrac}.
+But with this option, all displayed values are going to have the (same) size of one data-element.
+Hence, in such cases, the image proportions are going to be slightly different with this option.
+
+If your input image is not exactly divisible by the tile size and you want one value per tile for some higher-level processing, all is not lost though.
+You can see how many pixels were within each tile (for example to weight the values or discard some for later processing) with Gnuastro's Statistics (see @ref{Statistics}) as shown below.
+The output FITS file is going to have two extensions, one with the median calculated on each tile and one with the number of elements that each tile covers.
+You can then use the @code{where} operator in @ref{Arithmetic} to set the values of all tiles that don't have the regular area to a blank value.
@example
$ aststatistics --median --number --ontile input.fits \
@@ -7990,16 +6002,13 @@ elements, keep the non-blank elements untouched.
@cindex Taxicab metric
@cindex Manhattan metric
@cindex Metric: Manhattan, Taxicab, Radial
-The metric to use for finding nearest neighbors. Currently it only accepts
-the Manhattan (or taxicab) metric with @code{manhattan}, or the radial
-metric with @code{radial}.
+The metric to use for finding nearest neighbors.
+Currently it only accepts the Manhattan (or taxicab) metric with @code{manhattan}, or the radial metric with @code{radial}.
-The Manhattan distance between two points is defined with
-@mymath{|\Delta{x}|+|\Delta{y}|}. Thus the Manhattan metric has the
-advantage of being fast, but at the expense of being less accurate. The
-radial distance is the standard definition of distance in a Euclidean
-space: @mymath{\sqrt{\Delta{x}^2+\Delta{y}^2}}. It is accurate, but the
-multiplication and square root can slow down the processing.
+The Manhattan distance between two points is defined with @mymath{|\Delta{x}|+|\Delta{y}|}.
+Thus the Manhattan metric has the advantage of being fast, but at the expense of being less accurate.
+The radial distance is the standard definition of distance in a Euclidean space: @mymath{\sqrt{\Delta{x}^2+\Delta{y}^2}}.
+It is accurate, but the multiplication and square root can slow down the processing.
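As a plain-Python sketch (not Gnuastro's internal code), the two metrics differ only in their cost and in the value they return for the same offsets:

```python
# Illustrative comparison of the two metrics named above.
def manhattan(dx, dy):
    # |dx| + |dy|: only additions and absolute values, so fast.
    return abs(dx) + abs(dy)

def radial(dx, dy):
    # Standard Euclidean distance: multiplications plus a square root.
    return (dx * dx + dy * dy) ** 0.5
```

For offsets of 3 and 4 pixels, the radial metric gives the familiar 5, while the Manhattan metric gives 7; the Manhattan value is always greater than or equal to the radial one.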
@item --interpnumngb=INT
The number of nearby non-blank neighbors to use for interpolation.
@@ -8008,145 +6017,97 @@ The number of nearby non-blank neighbors to use for interpolation.
@node Operating mode options, , Processing options, Common options
@subsubsection Operating mode options
-Another group of options that are common to all the programs in Gnuastro
-are those to do with the general operation of the programs. The explanation
-for those that are not only limited to Gnuastro but are common to all GNU
-programs start with (GNU option).
+Another group of options that are common to all the programs in Gnuastro are those to do with the general operation of the programs.
+The explanation for those that are not limited to Gnuastro but are common to all GNU programs starts with (GNU option).
@vtable @option
@item --
-(GNU option) Stop parsing the command-line. This option can be useful in
-scripts or when using the shell history. Suppose you have a long list of
-options, and want to see if removing some of them (to read from
-configuration files, see @ref{Configuration files}) can give a better
-result. If the ones you want to remove are the last ones on the
-command-line, you don't have to delete them, you can just add @option{--}
-before them and if you don't get what you want, you can remove the
-@option{--} and get the same initial result.
+(GNU option) Stop parsing the command-line.
+This option can be useful in scripts or when using the shell history.
+Suppose you have a long list of options, and want to see if removing some of them (to read from configuration files, see @ref{Configuration files}) can give a better result.
+If the ones you want to remove are the last ones on the command-line, you don't have to delete them, you can just add @option{--} before them and if you don't get what you want, you can remove the @option{--} and get the same initial result.
@item --usage
-(GNU option) Only print the options and arguments and abort. This is very
-useful for when you know the what the options do, and have just forgot
-their long/short identifiers, see @ref{--usage}.
+(GNU option) Only print the options and arguments and abort.
+This is very useful for when you know what the options do and have just forgotten their long/short identifiers, see @ref{--usage}.
@item -?
@itemx --help
-(GNU option) Print all options with an explanation and abort. Adding this
-option will print all the options in their short and long formats, also
-displaying which ones need a value if they are called (with an @option{=}
-after the long format followed by a string specifying the format, see
-@ref{Options}). A short explanation is also given for what the option is
-for. The program will quit immediately after the message is printed and
-will not do any form of processing, see @ref{--help}.
+(GNU option) Print all options with an explanation and abort.
+Adding this option will print all the options in their short and long formats, also displaying which ones need a value if they are called (with an @option{=} after the long format followed by a string specifying the format, see @ref{Options}).
+A short explanation is also given for what the option is for.
+The program will quit immediately after the message is printed and will not do any form of processing, see @ref{--help}.
@item -V
@itemx --version
-(GNU option) Print a short message, showing the full name, version,
-copyright information and program authors and abort. On the first line, it
-will print the official name (not executable name) and version number of
-the program. Following this is a blank line and a copyright
-information. The program will not run.
+(GNU option) Print a short message, showing the full name, version, copyright information and program authors, and abort.
+On the first line, it will print the official name (not executable name) and version number of the program.
+Following this is a blank line and the copyright information.
+The program will not run.
@item -q
@itemx --quiet
-Don't report steps. All the programs in Gnuastro that have multiple major
-steps will report their steps for you to follow while they are
-operating. If you do not want to see these reports, you can call this
-option and only error/warning messages will be printed. If the steps are
-done very fast (depending on the properties of your input) disabling these
-reports will also decrease running time.
+Don't report steps.
+All the programs in Gnuastro that have multiple major steps will report their steps for you to follow while they are operating.
+If you do not want to see these reports, you can call this option and only error/warning messages will be printed.
+If the steps are done very fast (depending on the properties of your input), disabling these reports will also decrease running time.
@item --cite
-Print all necessary information to cite and acknowledge Gnuastro in your
-published papers. With this option, the programs will print the Bib@TeX{}
-entry to include in your paper for Gnuastro in general, and the particular
-program's paper (if that program comes with a separate paper). It will also
-print the necessary acknowledgment statement to add in the respective
-section of your paper and it will abort. For a more complete explanation,
-please see @ref{Acknowledgments}.
-
-Citations and acknowledgments are vital for the continued work on
-Gnuastro. Gnuastro started, and is continued, based on separate research
-projects. So if you find any of the tools offered in Gnuastro to be useful
-in your research, please use the output of this command to cite and
-acknowledge the program (and Gnuastro) in your research paper. Thank you.
-
-Gnuastro is still new, there is no separate paper only devoted to Gnuastro
-yet. Therefore currently the paper to cite for Gnuastro is the paper for
-NoiseChisel which is the first published paper introducing Gnuastro to the
-astronomical community. Upon reaching a certain point, a paper completely
-devoted to describing Gnuastro's many functionalities will be published,
-see @ref{GNU Astronomy Utilities 1.0}.
+Print all necessary information to cite and acknowledge Gnuastro in your published papers.
+With this option, the programs will print the Bib@TeX{} entry to include in your paper for Gnuastro in general, and the particular program's paper (if that program comes with a separate paper).
+It will also print the necessary acknowledgment statement to add in the respective section of your paper and it will abort.
+For a more complete explanation, please see @ref{Acknowledgments}.
+
+Citations and acknowledgments are vital for the continued work on Gnuastro.
+Gnuastro started, and is continued, based on separate research projects.
+So if you find any of the tools offered in Gnuastro to be useful in your research, please use the output of this command to cite and acknowledge the program (and Gnuastro) in your research paper.
+Thank you.
+
+Gnuastro is still new, so there is no separate paper devoted only to Gnuastro yet.
+Therefore, currently the paper to cite for Gnuastro is the paper for NoiseChisel, which is the first published paper introducing Gnuastro to the astronomical community.
+Upon reaching a certain point, a paper completely devoted to describing Gnuastro's many functionalities will be published, see @ref{GNU Astronomy Utilities 1.0}.
@item -P
@itemx --printparams
-With this option, Gnuastro's programs will read your command-line options
-and all the configuration files. If there is no problem (like a missing
-parameter or a value in the wrong format or range) and immediately before
-actually running, the programs will print the full list of option names,
-values and descriptions, sorted and grouped by context and abort. They will
-also report the version number, the date they were configured on your
-system and the time they were reported.
-
-As an example, you can give your full command-line options and even the
-input and output file names and finally just add @option{-P} to check if
-all the parameters are finely set. If everything is OK, you can just run
-the same command (easily retrieved from the shell history, with the top
-arrow key) and simply remove the last two characters that showed this
-option.
+With this option, Gnuastro's programs will read your command-line options and all the configuration files.
+If there is no problem (like a missing parameter or a value in the wrong format or range) and immediately before actually running, the programs will print the full list of option names, values and descriptions, sorted and grouped by context, and abort.
+They will also report the version number, the date they were configured on your system and the time they were reported.
-Since no program will actually start its processing when this option
-is called, the otherwise mandatory arguments for each program (for
-example input image or catalog files) are no longer required when you
-call this option.
+As an example, you can give your full command-line options and even the input and output file names and finally just add @option{-P} to check if all the parameters are properly set.
+If everything is OK, you can just run the same command (easily retrieved from the shell history, with the up arrow key) and simply remove the last two characters that showed this option.
+
+No program will actually start its processing when this option is called.
+The otherwise mandatory arguments for each program (for example input image or catalog files) are no longer required when you call this option.
@item --config=STR
-Parse @option{STR} as a configuration file immediately when this option is
-confronted (see @ref{Configuration files}). The @option{--config} option
-can be called multiple times in one run of any Gnuastro program on the
-command-line or in the configuration files. In any case, it will be
-immediately read (before parsing the rest of the options on the
-command-line, or lines in a configuration file).
-
-Note that by definition, options on the command-line still take precedence
-over those in any configuration file, including the file(s) given to this
-option if they are called before it. Also see @option{--lastconfig} and
-@option{--onlyversion} on how this option can be used for reproducible
-results. You can use @option{--checkconfig} (below) to check/confirm the
-parsing of configuration files.
+Parse @option{STR} as a configuration file immediately when this option is confronted (see @ref{Configuration files}).
+The @option{--config} option can be called multiple times in one run of any Gnuastro program on the command-line or in the configuration files.
+In any case, it will be immediately read (before parsing the rest of the options on the command-line, or lines in a configuration file).
+
+Note that by definition, options on the command-line still take precedence over those in any configuration file, including the file(s) given to this option if they are called before it.
+Also see @option{--lastconfig} and @option{--onlyversion} on how this option can be used for reproducible results.
+You can use @option{--checkconfig} (below) to check/confirm the parsing of configuration files.
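To make the reproducibility use of @option{--config} concrete, here is a hypothetical configuration file combining it with @option{--lastconfig} and @option{--onlyversion}; the file name, version number and values are illustrative only (each line is an option name, white space, then a value, following @ref{Configuration file format}):

```conf
# Hypothetical `repro.conf', given to a program as --config=repro.conf.
onlyversion    0.11      # Abort unless this exact Gnuastro version runs.
lastconfig     1         # Parse no further configuration files.
tilesize       30,30     # Example of an analysis parameter to pin down.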
@item --checkconfig
-Print options and their values, within the command-line or configuration
-files, as they are parsed (see @ref{Configuration file precedence}). If an
-option has already been set, or is ignored by the program, this option will
-also inform you with special values like @code{--ALREADY-SET--}. Only
-options that are parsed after this option are printed, so to see the
-parsing of all input options, it is recommended to put this option
-immediately after the program name before any other options.
+Print options and their values, within the command-line or configuration files, as they are parsed (see @ref{Configuration file precedence}).
+If an option has already been set, or is ignored by the program, this option will also inform you with special values like @code{--ALREADY-SET--}.
+Only options that are parsed after this option are printed, so to see the parsing of all input options, it is recommended to put this option immediately after the program name before any other options.
@cindex Debug
-This is a very good option to confirm where the value of each option is has
-been defined in scenarios where there are multiple configuration files (for
-debugging).
+This is a very good option to confirm where the value of each option has been defined in scenarios where there are multiple configuration files (for debugging).
@item -S
@itemx --setdirconf
-Update the current directory configuration file for the Gnuastro program
-and quit. The full set of command-line and configuration file options will
-be parsed and options with a value will be written in the current directory
-configuration file for this program (see @ref{Configuration files}). If the
-configuration file or its directory doesn't exist, it will be created. If a
-configuration file exists it will be replaced (after it, and all other
-configuration files have been read). In any case, the program will not run.
-
-This is the recommended method@footnote{Alternatively, you can use your
-favorite text editor.} to edit/set the configuration file for all future
-calls to Gnuastro's programs. It will internally check if your values are
-in the correct range and type and save them according to the configuration
-file format, see @ref{Configuration file format}. So if there are
-unreasonable values to some options, the program will notify you and abort
-before writing the final configuration file.
+Update the current directory configuration file for the Gnuastro program and quit.
+The full set of command-line and configuration file options will be parsed and options with a value will be written in the current directory configuration file for this program (see @ref{Configuration files}).
+If the configuration file or its directory doesn't exist, it will be created.
+If a configuration file exists it will be replaced (after it, and all other configuration files have been read).
+In any case, the program will not run.
+
+This is the recommended method@footnote{Alternatively, you can use your favorite text editor.} to edit/set the configuration file for all future calls to Gnuastro's programs.
+It will internally check if your values are in the correct range and type and save them according to the configuration file format, see @ref{Configuration file format}.
+So if there are unreasonable values to some options, the program will notify you and abort before writing the final configuration file.
When this option is called, the otherwise mandatory arguments, for example input image or catalog file(s), are no longer mandatory (since
@@ -8154,51 +6115,32 @@ the program will not run).
@item -U
@itemx --setusrconf
-Update the user configuration file and quit (see @ref{Configuration
-files}). See explanation under @option{--setdirconf} for more details.
+Update the user configuration file and quit (see @ref{Configuration files}).
+See explanation under @option{--setdirconf} for more details.
@item --lastconfig
-This is the last configuration file that must be read. When this option is
-confronted in any stage of reading the options (on the command-line or in a
-configuration file), no other configuration file will be parsed, see
-@ref{Configuration file precedence} and @ref{Current directory and User
-wide}. Like all on/off options, on the command-line, this option doesn't
-take any values. But in a configuration file, it takes the values of
-@option{0} or @option{1}, see @ref{Configuration file format}. If it is
-present in a configuration file with a value of @option{0}, then all later
-occurrences of this option will be ignored.
+This is the last configuration file that must be read.
+When this option is confronted in any stage of reading the options (on the
command-line or in a configuration file), no other configuration file will be
parsed, see @ref{Configuration file precedence} and @ref{Current directory and
User wide}.
+Like all on/off options, on the command-line, this option doesn't take any
values.
+But in a configuration file, it takes the values of @option{0} or @option{1},
see @ref{Configuration file format}.
+If it is present in a configuration file with a value of @option{0}, then all
later occurrences of this option will be ignored.
@item --onlyversion=STR
-Only run the program if Gnuastro's version is exactly equal to @option{STR}
-(see @ref{Version numbering}). Note that it is not compared as a number,
-but as a string of characters, so @option{0}, or @option{0.0} and
-@option{0.00} are different. If the running Gnuastro version is different,
-then this option will report an error and abort as soon as it is
-confronted on the command-line or in a configuration file. If the running
-Gnuastro version is the same as @option{STR}, then the program will run as
-if this option was not called.
-
-This is useful if you want your results to be exactly reproducible and not
-mistakenly run with an updated/newer or older version of the
-program. Besides internal algorithmic/behavior changes in programs, the
-existence of options or their names might change between versions
-(especially in these earlier versions of Gnuastro).
-
-Hence, when using this option (probably in a script or in a configuration
-file), be sure to call it before other options. The benefit is that, when
-the version differs, the other options won't be parsed and you, or your
-collaborators/users, won't get errors saying an option in your
-configuration doesn't exist in the running version of the program.
-
-Here is one example of how this option can be used in conjunction with the
-@option{--lastconfig} option. Let's assume that you were satisfied with the
-results of this command: @command{astnoisechisel image.fits --snquant=0.95}
-(along with various options set in various configuration files). You can
-save the state of NoiseChisel and reproduce that exact result on
-@file{image.fits} later by following these steps (the the extra spaces, and
-@key{\}, are only for easy readability, if you want to try it out, only one
-space between each token is enough).
+Only run the program if Gnuastro's version is exactly equal to @option{STR}
(see @ref{Version numbering}).
+Note that it is not compared as a number, but as a string of characters, so
@option{0}, @option{0.0} and @option{0.00} are different.
+If the running Gnuastro version is different, then this option will report an
error and abort as soon as it is confronted on the command-line or in a
configuration file.
+If the running Gnuastro version is the same as @option{STR}, then the program
will run as if this option was not called.
+
+This is useful if you want your results to be exactly reproducible and not
mistakenly run with an updated/newer or older version of the program.
+Besides internal algorithmic/behavior changes in programs, the existence of
options or their names might change between versions (especially in these
earlier versions of Gnuastro).
+
+Hence, when using this option (probably in a script or in a configuration
file), be sure to call it before other options.
+The benefit is that, when the version differs, the other options won't be
parsed and you, or your collaborators/users, won't get errors saying an option
in your configuration doesn't exist in the running version of the program.
+
+Here is one example of how this option can be used in conjunction with the
@option{--lastconfig} option.
+Let's assume that you were satisfied with the results of this command:
@command{astnoisechisel image.fits --snquant=0.95} (along with various options
set in various configuration files).
+You can save the state of NoiseChisel and reproduce that exact result on
@file{image.fits} later by following these steps (the extra spaces and
@key{\} are only for easy readability; if you want to try it out, only one
space between each token is enough).
@example
$ echo "onlyversion X.XX" > reproducible.conf
@@ -8207,55 +6149,40 @@ $ astnoisechisel image.fits --snquant=0.95 -P
\
>> reproducible.conf
@end example
-@option{--onlyversion} was available from Gnuastro 0.0, so putting it
-immediately at the start of a configuration file will ensure that later,
-you (or others using different version) won't get a non-recognized option
-error in case an option was added/removed. @option{--lastconfig} will
-inform the installed NoiseChisel to not parse any other configuration
-files. This is done because we don't want the user's user-wide or system
-wide option values affecting our results. Finally, with the third command,
-which has a @option{-P} (short for @option{--printparams}), NoiseChisel
-will print all the option values visible to it (in all the configuration
-files) and the shell will append them to @file{reproduce.conf}. Hence, you
-don't have to worry about remembering the (possibly) different options in
-the different configuration files.
+@option{--onlyversion} was available from Gnuastro 0.0, so putting it
immediately at the start of a configuration file will ensure that later, you
(or others using a different version) won't get a non-recognized option error
in case an option was added/removed.
+@option{--lastconfig} will inform the installed NoiseChisel to not parse any
other configuration files.
+This is done because we don't want the user's user-wide or system-wide option
values affecting our results.
+Finally, with the third command, which has a @option{-P} (short for
@option{--printparams}), NoiseChisel will print all the option values visible
to it (in all the configuration files) and the shell will append them to
@file{reproducible.conf}.
+Hence, you don't have to worry about remembering the (possibly) different
options in the different configuration files.
-Afterwards, if you run NoiseChisel as shown below (telling it to read this
-configuration file with the @file{--config} option). You can be sure that
-there will either be an error (for version mismatch) or it will produce
-exactly the same result that you got before.
+Afterwards, run NoiseChisel as shown below (telling it to read this
configuration file with the @option{--config} option).
+You can be sure that there will either be an error (for version mismatch) or
it will produce exactly the same result that you got before.
@example
$ astnoisechisel --config=reproducible.conf
@end example
@item --log
-Some programs can generate extra information about their outputs in a log
-file. When this option is called in those programs, the log file will also
-be printed. If the program doesn't generate a log file, this option is
-ignored.
+Some programs can generate extra information about their outputs in a log file.
+When this option is called in those programs, the log file will also be
printed.
+If the program doesn't generate a log file, this option is ignored.
@cartouche
@noindent
-@strong{@option{--log} isn't thread-safe}: The log file usually has a fixed
-name. Therefore if two simultaneous calls (with @option{--log}) of a
-program are made in the same directory, the program will try to write to
-the same file. This will cause problems like unreasonable log file,
-undefined behavior, or a crash.
+@strong{@option{--log} isn't thread-safe}: The log file usually has a fixed
name.
+Therefore, if two simultaneous calls (with @option{--log}) of a program are
made in the same directory, the program will try to write to the same file.
+This will cause problems like an unreasonable log file, undefined behavior,
or a crash.
@end cartouche
@cindex CPU threads, set number
@cindex Number of CPU threads to use
@item -N INT
@itemx --numthreads=INT
-Use @option{INT} CPU threads when running a Gnuastro program (see
-@ref{Multi-threaded operations}). If the value is zero (@code{0}), or this
-option is not given on the command-line or any configuration file, the
-value will be determined at run-time: the maximum number of threads
-available to the system when you run a Gnuastro program.
+Use @option{INT} CPU threads when running a Gnuastro program (see
@ref{Multi-threaded operations}).
+If the value is zero (@code{0}), or this option is not given on the
command-line or any configuration file, the value will be determined at
run-time: the maximum number of threads available to the system when you run a
Gnuastro program.
-Note that multi-threaded programming is only relevant to some programs. In
-others, this option will be ignored.
+Note that multi-threaded programming is only relevant to some programs.
+In others, this option will be ignored.
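Though Gnuastro determines this run-time default internally, the same number can be inspected with a standard tool (a rough analogy, assuming GNU coreutils is installed; @command{nproc} is not part of Gnuastro):

```shell
# Number of processing units available to this process; this is
# the value that the run-time default of --numthreads corresponds
# to (analogy via GNU coreutils, not Gnuastro itself):
nproc
```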
@end vtable
@@ -8268,94 +6195,67 @@ others, this option will be ignored.
@cindex Standard input
@cindex Stream: standard input
-The most common way to feed the primary/first input dataset into a program
-is to give its filename as an argument (discussed in @ref{Arguments}). When
-you want to run a series of programs in sequence, this means that each will
-have to keep the output of each program in a separate file and re-type that
-file's name in the next command. This can be very slow and frustrating
-(mis-typing a file's name).
+The most common way to feed the primary/first input dataset into a program is
to give its filename as an argument (discussed in @ref{Arguments}).
+When you want to run a series of programs in sequence, this means that you
will have to keep the output of each program in a separate file and re-type
that file's name in the next command.
+This can be very slow and frustrating (mis-typing a file's name).
@cindex Standard output stream
@cindex Stream: standard output
-To solve the problem, the founders of Unix defined pipes to directly feed
-the output of one program (its ``Standard output'' stream) into the
-``standard input'' of a next program. This removes the need to make
-temporary files between separate processes and became one of the best
-demonstrations of the Unix-way, or Unix philosophy.
-
-Every program has three streams identifying where it reads/writes non-file
-inputs/outputs: @emph{Standard input}, @emph{Standard output}, and
-@emph{Standard error}. When a program is called alone, all three are
-directed to the terminal that you are using. If it needs an input, it will
-prompt you for one and you can type it in. Or, it prints its results in the
-terminal for you to see.
-
-For example, say you have a FITS table/catalog containing the B and V band
-magnitudes (@code{MAG_B} and @code{MAG_V} columns) of a selection of
-galaxies along with many other columns. If you want to see only these two
-columns in your terminal, can use Gnuastro's @ref{Table} program like
-below:
+To solve the problem, the founders of Unix defined pipes to directly feed the
output of one program (its ``Standard output'' stream) into the ``standard
input'' of the next program.
+This removes the need to make temporary files between separate processes and
became one of the best demonstrations of the Unix-way, or Unix philosophy.
+
+Every program has three streams identifying where it reads/writes non-file
inputs/outputs: @emph{Standard input}, @emph{Standard output}, and
@emph{Standard error}.
+When a program is called alone, all three are directed to the terminal that
you are using.
+If it needs an input, it will prompt you for one and you can type it in.
+Or, it prints its results in the terminal for you to see.
+
+For example, say you have a FITS table/catalog containing the B and V band
magnitudes (@code{MAG_B} and @code{MAG_V} columns) of a selection of galaxies
along with many other columns.
+If you want to see only these two columns in your terminal, you can use
Gnuastro's @ref{Table} program as shown below:
@example
$ asttable cat.fits -cMAG_B,MAG_V
@end example
-Through the Unix pipe mechanism, when the shell confronts the pipe
-character (@key{|}), it connects the standard output of the program before
-the pipe, to the standard input of the program after it. So it is literally
-a ``pipe'': everything that you would see printed by the first program on
-the command (without any pipe), is now passed to the second program (and
-not seen by you).
+Through the Unix pipe mechanism, when the shell confronts the pipe character
(@key{|}), it connects the standard output of the program before the pipe, to
the standard input of the program after it.
+So it is literally a ``pipe'': everything that you would see printed by the
first program on the command-line (without any pipe) is now passed to the
second program (and not seen by you).
@cindex AWK
@cindex GNU AWK
-To continue the previous example, let's say you want to see the B-V
-color. To do this, you can pipe Table's output to AWK (a wonderful tool for
-processing things like plain text tables):
+To continue the previous example, let's say you want to see the B-V color.
+To do this, you can pipe Table's output to AWK (a wonderful tool for
processing things like plain text tables):
@example
$ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}'
@end example
-But understanding the distribution by visually seeing all the numbers under
-each other is not too useful! You can therefore feed this single column
-information into @ref{Statistics} to give you a general feeling of the
-distribution with the same command:
+But understanding the distribution by visually seeing all the numbers under
each other is not too useful!
+You can therefore feed this single column information into @ref{Statistics}
to give you a general feeling of the distribution with the same command:
@example
$ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}' | aststatistics
@end example
-Gnuastro's programs that accept input from standard input, only look into
-the Standard input stream if there is no first argument. In other words,
-arguments take precedence over Standard input. When no argument is
-provided, the programs check if the standard input stream is already full
-or not (output from another program is waiting to be used). If data is
-present in the standard input stream, it is used.
+Gnuastro's programs that accept input from standard input only look into the
Standard input stream if there is no first argument.
+In other words, arguments take precedence over Standard input.
+When no argument is provided, the programs check if the standard input stream
is already full or not (output from another program is waiting to be used).
+If data is present in the standard input stream, it is used.
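This precedence is analogous to standard tools like @command{cat}: when a filename argument is present, anything waiting on standard input is simply ignored. A minimal sketch (using @command{cat} as a stand-in, not a Gnuastro program):

```shell
# The filename argument takes precedence over standard input:
printf 'from-file\n' > file-arg.txt
echo 'from-stdin' | cat file-arg.txt    # Prints "from-file".
```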
-When the standard input is empty, the program will wait
-@option{--stdintimeout} micro-seconds for you to manually enter the first
-line (ending with a new-line character, or the @key{ENTER} key, see
-@ref{Input output options}). If it detects the first line in this time,
-there is no more time limit, and you can manually write/type all the lines
-for as long as it takes. To inform the program that Standard input has
-finished, press @key{CTRL-D} after a new line. If the program doesn't catch
-the first line before the time-out finishes, it will abort with an error
-saying that no input was provided.
+When the standard input is empty, the program will wait
@option{--stdintimeout} micro-seconds for you to manually enter the first line
(ending with a new-line character, or the @key{ENTER} key, see @ref{Input
output options}).
+If it detects the first line in this time, there is no more time limit, and
you can manually write/type all the lines for as long as it takes.
+To inform the program that Standard input has finished, press @key{CTRL-D}
after a new line.
+If the program doesn't catch the first line before the time-out finishes, it
will abort with an error saying that no input was provided.
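The time-out behavior can be sketched with the shell's own @command{read} builtin (only an analogy: @option{--stdintimeout} is in micro-seconds, while @command{read -t} takes seconds; Bash is assumed):

```shell
# Wait up to 2 seconds for a first line on standard input; once a
# first line arrives there would be no further time limit:
if read -t 2 firstline; then
  echo "got: $firstline"
else
  echo "no input before the time-out"
fi
```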
@cartouche
@noindent
-@strong{Manual input in Standard input is discarded: } Be careful that when
-you manually fill the Standard input, the data will be discarded once the
-program finishes and reproducing the result will be impossible. Therefore
-this form of providing input is only good for temporary tests.
+@strong{Manual input in Standard input is discarded:}
+Be careful that when you manually fill the Standard input, the data will be
discarded once the program finishes and reproducing the result will be
impossible.
+Therefore this form of providing input is only good for temporary tests.
@end cartouche
@cartouche
@noindent
-@strong{Standard input currently only for plain text: } Currently Standard
-input only works for plain text inputs like the example above. We will
-later allow FITS files into the programs through standard input also.
+@strong{Standard input currently only for plain text:}
+Currently Standard input only works for plain text inputs like the example
above.
+We will later allow FITS files into the programs through standard input also.
@end cartouche
@@ -8367,14 +6267,10 @@ later allow FITS files into the programs through
standard input also.
@cindex Necessary parameters
@cindex Default option values
@cindex File system Hierarchy Standard
-Each program needs a certain number of parameters to run. Supplying
-all the necessary parameters each time you run the program is very
-frustrating and prone to errors. Therefore all the programs read the
-values for the necessary options you have not given in the command
-line from one of several plain text files (which you can view and edit
-with any text editor). These files are known as configuration files
-and are usually kept in a directory named @file{etc/} according to the
-file system hierarchy
+Each program needs a certain number of parameters to run.
+Supplying all the necessary parameters each time you run the program is very
frustrating and prone to errors.
+Therefore all the programs read the values for the necessary options you have
not given in the command line from one of several plain text files (which you
can view and edit with any text editor).
+These files are known as configuration files and are usually kept in a
directory named @file{etc/} according to the file system hierarchy
standard@footnote{@url{http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard}}.
@vindex --output
@@ -8382,23 +6278,12 @@
standard@footnote{@url{http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standar
@cindex CPU threads, number
@cindex Internal default value
@cindex Number of CPU threads to use
-The thing to have in mind is that none of the programs in Gnuastro keep any
-internal default value. All the values must either be stored in one of the
-configuration files or explicitly called in the command-line. In case the
-necessary parameters are not given through any of these methods, the
-program will print a missing option error and abort. The only exception to
-this is @option{--numthreads}, whose default value is determined at
-run-time using the number of threads available to your system, see
-@ref{Multi-threaded operations}. Of course, you can still provide a default
-value for the number of threads at any of the levels below, but if you
-don't, the program will not abort. Also note that through automatic output
-name generation, the value to the @option{--output} option is also not
-mandatory on the command-line or in the configuration files for all
-programs which don't rely on that value as an input@footnote{One example of
-a program which uses the value given to @option{--output} as an input is
-ConvertType, this value specifies the type of the output through the value
-to @option{--output}, see @ref{Invoking astconvertt}.}, see @ref{Automatic
-output}.
+The thing to have in mind is that none of the programs in Gnuastro keep any
internal default value.
+All the values must either be stored in one of the configuration files or
explicitly called in the command-line.
+In case the necessary parameters are not given through any of these methods,
the program will print a missing option error and abort.
+The only exception to this is @option{--numthreads}, whose default value is
determined at run-time using the number of threads available to your system,
see @ref{Multi-threaded operations}.
+Of course, you can still provide a default value for the number of threads at
any of the levels below, but if you don't, the program will not abort.
+Also note that through automatic output name generation, the value to the
@option{--output} option is also not mandatory on the command-line or in the
configuration files for all programs which don't rely on that value as an
input@footnote{One example of a program which uses the value given to
@option{--output} as an input is ConvertType: this value specifies the type of
the output, see @ref{Invoking astconvertt}.}, see @ref{Automatic output}.
@@ -8413,45 +6298,30 @@ output}.
@subsection Configuration file format
@cindex Configuration file suffix
-The configuration files for each program have the standard program
-executable name with a `@file{.conf}' suffix. When you download the source
-code, you can find them in the same directory as the source code of each
-program, see @ref{Program source}.
+The configuration files for each program have the standard program executable
name with a `@file{.conf}' suffix.
+When you download the source code, you can find them in the same directory as
the source code of each program, see @ref{Program source}.
@cindex White space character
@cindex Configuration file format
-Any line in the configuration file whose first non-white character is a
-@key{#} is considered to be a comment and is ignored. An empty line is also
-similarly ignored. The long name of the option should be used as an
-identifier. The parameter name and parameter value have to be separated by
-any number of `white-space' characters: space, tab or vertical tab. By
-default several space characters are used. If the value of an option has
-space characters (most commonly for the @option{hdu} option), then the full
-value can be enclosed in double quotation signs (@key{"}, similar to the
-example in @ref{Arguments and options}). If it is an option without a value
-in the @option{--help} output (on/off option, see @ref{Options}), then the
-value should be @option{1} if it is to be `on' and @option{0} otherwise.
-
-In each non-commented and non-blank line, any text after the first two
-words (option identifier and value) is ignored. If an option identifier is
-not recognized in the configuration file, the name of the file, the line
-number of the unrecognized option, and the unrecognized identifier name
-will be reported and the program will abort. If a parameter is repeated
-more more than once in the configuration files, accepts only one value, and
-is not set on the command-line, then only the first value will be used, the
-rest will be ignored.
+Any line in the configuration file whose first non-white character is a
@key{#} is considered to be a comment and is ignored.
+An empty line is also similarly ignored.
+The long name of the option should be used as an identifier.
+The parameter name and parameter value have to be separated by any number of
`white-space' characters: space, tab or vertical tab.
+By default several space characters are used.
+If the value of an option has space characters (most commonly for the
@option{hdu} option), then the full value can be enclosed in double quotation
signs (@key{"}, similar to the example in @ref{Arguments and options}).
+If it is an option without a value in the @option{--help} output (on/off
option, see @ref{Options}), then the value should be @option{1} if it is to be
`on' and @option{0} otherwise.
+
+In each non-commented and non-blank line, any text after the first two words
(option identifier and value) is ignored.
+If an option identifier is not recognized in the configuration file, the name
of the file, the line number of the unrecognized option, and the unrecognized
identifier name will be reported and the program will abort.
+If a parameter that accepts only one value is repeated more than once in the
configuration files, and is not set on the command-line, then only the first
value will be used; the rest will be ignored.
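Putting these rules together, a configuration file might look like the sketch below (the option names and values are only illustrative, and the @command{awk} line is a simplified model of the parsing, not Gnuastro's actual parser):

```shell
# Write a small configuration file following the format above:
cat > example.conf <<'EOF'
# Input:
 hdu          1     # Text after the first two words is ignored.

# Detection:
 snquant      0.95
EOF

# Keep only the first two words of non-comment, non-blank lines:
awk '!/^[[:space:]]*#/ && NF {print $1, $2}' example.conf
```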
@cindex Writing configuration files
@cindex Automatic configuration file writing
@cindex Configuration files, writing
-You can build or edit any of the directories and the configuration files
-yourself using any text editor. However, it is recommended to use the
-@option{--setdirconf} and @option{--setusrconf} options to set default
-values for the current directory or this user, see @ref{Operating mode
-options}. With these options, the values you give will be checked before
-writing in the configuration file. They will also print a set of commented
-lines guiding the reader and will also classify the options based on their
-context and write them in their logical order to be more understandable.
+You can build or edit any of the directories and the configuration files
yourself using any text editor.
+However, it is recommended to use the @option{--setdirconf} and
@option{--setusrconf} options to set default values for the current directory
or this user, see @ref{Operating mode options}.
+With these options, the values you give will be checked before writing in the
configuration file.
+They will also print a set of commented lines guiding the reader, and will
classify the options based on their context and write them in their logical
order to be more understandable.
@node Configuration file precedence, Current directory and User wide,
Configuration file format, Configuration files
@@ -8460,105 +6330,68 @@ context and write them in their logical order to be
more understandable.
@cindex Configuration file precedence
@cindex Configuration file directories
@cindex Precedence, configuration files
-The option values in all the programs of Gnuastro will be filled in the
-following order. If an option only takes one value which is given in an
-earlier step, any value for that option in a later step will be
-ignored. Note that if the @option{lastconfig} option is specified in any
-step below, no other configuration files will be parsed (see @ref{Operating
-mode options}).
+The option values in all the programs of Gnuastro will be filled in the
following order.
+If an option only takes one value which is given in an earlier step, any value
for that option in a later step will be ignored.
+Note that if the @option{lastconfig} option is specified in any step below, no
other configuration files will be parsed (see @ref{Operating mode options}).
@enumerate
@item
Command-line options, for a particular run of ProgramName.
@item
-@file{.gnuastro/astprogname.conf} is parsed by ProgramName in the current
-directory.
+@file{.gnuastro/astprogname.conf} is parsed by ProgramName in the current
directory.
@item
-@file{.gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the
-current directory.
+@file{.gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the
current directory.
@item
-@file{$HOME/.local/etc/astprogname.conf} is parsed by ProgramName in the
-user's home directory (see @ref{Current directory and User wide}).
+@file{$HOME/.local/etc/astprogname.conf} is parsed by ProgramName in the
user's home directory (see @ref{Current directory and User wide}).
@item
-@file{$HOME/.local/etc/gnuastro.conf} is parsed by all Gnuastro programs in
-the user's home directory (see @ref{Current directory and User wide}).
+@file{$HOME/.local/etc/gnuastro.conf} is parsed by all Gnuastro programs in
the user's home directory (see @ref{Current directory and User wide}).
@item
-@file{prefix/etc/astprogname.conf} is parsed by ProgramName in the
-system-wide installation directory (see @ref{System wide} for
-@file{prefix}).
+@file{prefix/etc/astprogname.conf} is parsed by ProgramName in the system-wide
installation directory (see @ref{System wide} for @file{prefix}).
@item
-@file{prefix/etc/gnuastro.conf} is parsed by all Gnuastro programs in the
-system-wide installation directory (see @ref{System wide} for
-@file{prefix}).
+@file{prefix/etc/gnuastro.conf} is parsed by all Gnuastro programs in the
system-wide installation directory (see @ref{System wide} for @file{prefix}).
@end enumerate
-The basic idea behind setting this progressive state of checking for
-parameter values is that separate users of a computer or separate folders
-in a user's file system might need different values for some
-parameters.
+The basic idea behind setting this progressive state of checking for parameter
values is that separate users of a computer or separate folders in a user's
file system might need different values for some parameters.
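The ``first value wins'' behavior above can be sketched in the shell by reading files in precedence order and keeping only the first occurrence of each option (the paths and the @command{awk} one-liner are only an illustrative model, not Gnuastro's implementation):

```shell
# The current-directory file is read before the system-wide file,
# so its value for `hdu' wins:
mkdir -p demo/.gnuastro demo/etc
echo 'hdu 1' > demo/.gnuastro/gnuastro.conf
echo 'hdu 0' > demo/etc/gnuastro.conf
awk '!seen[$1]++ {print $1, $2}' \
    demo/.gnuastro/gnuastro.conf demo/etc/gnuastro.conf   # Prints "hdu 1".
```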
@cartouche
@noindent
-@strong{Checking the order:} You can confirm/check the order of parsing
-configuration files using the @option{--checkconfig} option with any
-Gnuastro program, see @ref{Operating mode options}. Just be sure to place
-this option immediately after the program name, before any other option.
+@strong{Checking the order:}
+You can confirm/check the order of parsing configuration files using the
@option{--checkconfig} option with any Gnuastro program, see @ref{Operating
mode options}.
+Just be sure to place this option immediately after the program name, before
any other option.
@end cartouche
-As you see above, there can also be a configuration file containing the
-common options in all the programs: @file{gnuastro.conf} (see @ref{Common
-options}). If options specific to one program are specified in this file,
-there will be unrecognized option errors, or unexpected behavior if the
-option has different behavior in another program. On the other hand, there
-is no problem with @file{astprogname.conf} containing common
-options@footnote{As an example, the @option{--setdirconf} and
-@option{--setusrconf} options will also write the common options they have
-read in their produced @file{astprogname.conf}.}.
+As you see above, there can also be a configuration file containing the common
options in all the programs: @file{gnuastro.conf} (see @ref{Common options}).
+If options specific to one program are specified in this file, there will be
unrecognized option errors, or unexpected behavior if the option has different
behavior in another program.
+On the other hand, there is no problem with @file{astprogname.conf} containing
common options@footnote{As an example, the @option{--setdirconf} and
@option{--setusrconf} options will also write the common options they have read
in their produced @file{astprogname.conf}.}.
@cartouche
@noindent
-@strong{Manipulating the order:} You can manipulate this order or add new
-files with the following two options which are fully described in
+@strong{Manipulating the order:} You can manipulate this order or add new
files with the following two options which are fully described in
@ref{Operating mode options}:
@table @option
@item --config
-Allows you to define any file to be parsed as a configuration file on the
-command-line or within the any other configuration file. Recall that the
-file given to @option{--config} is parsed immediately when this option is
-confronted (on the command-line or in a configuration file).
+Allows you to define any file to be parsed as a configuration file on the
command-line or within any other configuration file.
+Recall that the file given to @option{--config} is parsed immediately when
this option is confronted (on the command-line or in a configuration file).
@item --lastconfig
-Allows you to stop the parsing of subsequent configuration files. Note that
-if this option is given in a configuration file, it will be fully read, so
-its position in the configuration doesn't matter (unlike
-@option{--config}).
+Allows you to stop the parsing of subsequent configuration files.
+Note that if this option is given in a configuration file, it will be fully
read, so its position in the configuration doesn't matter (unlike
@option{--config}).
@end table
@end cartouche
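As a minimal sketch of the first option above (the file name and the option value here are only illustrative, not taken from the manual), a project-specific configuration file can be written by hand and then loaded explicitly with @option{--config}:

```shell
# Write a throwaway configuration file: one "name value" pair per
# line, the same format as Gnuastro's own configuration files.
mkdir -p myproject
echo "hdu 1" > myproject/crop.conf

# Hypothetical call: the file is parsed as soon as the option is
# encountered, as described in the cartouche above:
#   $ astcrop --config=myproject/crop.conf image.fits

# Inspect the file to confirm its contents:
cat myproject/crop.conf
```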
+One example of benefiting from these configuration files can be this: raw
telescope images usually have their main image extension in the second FITS
extension, while processed FITS images usually only have one extension.
+If your system-wide default input extension is 0 (the first), then when you
want to work with the former group of data you have to explicitly mention it to
the programs every time.
+With this hierarchy of configuration files to check, you can set different default values for the different directories that you would like to run Gnuastro in for your different purposes, so you won't have to worry about this issue any more.
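For instance (a sketch, assuming the common @option{--hdu} option with extension counting starting from 0), in a directory holding raw telescope images you could make the second FITS extension the local default:

```shell
# Create the current-directory (local) configuration directory:
mkdir -p .gnuastro

# Make Gnuastro programs run from this directory default to the
# second FITS extension (extension counting starts from 0):
echo "hdu 1" > .gnuastro/gnuastro.conf

# Confirm the local default:
cat .gnuastro/gnuastro.conf
```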
-One example of benefiting from these configuration files can be this: raw
-telescope images usually have their main image extension in the second FITS
-extension, while processed FITS images usually only have one extension. If
-your system-wide default input extension is 0 (the first), then when you
-want to work with the former group of data you have to explicitly mention
-it to the programs every time. With this progressive state of default
-values to check, you can set different default values for the different
-directories that you would like to run Gnuastro in for your different
-purposes, so you won't have to worry about this issue any more.
-
-The same can be said about the @file{gnuastro.conf} files: by specifying a
-behavior in this single file, all Gnuastro programs in the respective
-directory, user, or system-wide steps will behave similarly. For example to
-keep the input's directory when no specific output is given (see
-@ref{Automatic output}), or to not delete an existing file if it has the
-same name as a given output (see @ref{Input output options}).
+The same can be said about the @file{gnuastro.conf} files: by specifying a
behavior in this single file, all Gnuastro programs in the respective
directory, user, or system-wide steps will behave similarly.
+For example, to keep the input's directory when no specific output is given (see @ref{Automatic output}), or to not delete an existing file if it has the same name as a given output (see @ref{Input output options}).
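A hand-written sketch of such a user-wide @file{gnuastro.conf} (assuming the @option{--keepinputdir} and @option{--dontdelete} options behind the two behaviors referenced above) could look like this:

```shell
# Create the user-wide configuration directory if it does not exist:
mkdir -p ~/.local/etc

# Every Gnuastro program run by this user will now keep the input's
# directory for automatic outputs, and will not delete existing files:
cat >> ~/.local/etc/gnuastro.conf <<EOF
keepinputdir 1
dontdelete   1
EOF

# Confirm the shared defaults:
cat ~/.local/etc/gnuastro.conf
```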
@node Current directory and User wide, System wide, Configuration file
precedence, Configuration files
@@ -8567,24 +6400,17 @@ same name as a given output (see @ref{Input output
options}).
@cindex @file{$HOME}
@cindex @file{./.gnuastro/}
@cindex @file{$HOME/.local/etc/}
-For the current (local) and user-wide directories, the configuration files
-are stored in the hidden sub-directories named @file{.gnuastro/} and
-@file{$HOME/.local/etc/} respectively. Unless you have changed it, the
-@file{$HOME} environment variable should point to your home directory. You
-can check it by running @command{$ echo $HOME}. Each time you run any of
-the programs in Gnuastro, this environment variable is read and placed in
-the above address. So if you suddenly see that your home configuration
-files are not being read, probably you (or some other program) has changed
-the value of this environment variable.
+For the current (local) and user-wide directories, the configuration files are
stored in the hidden sub-directories named @file{.gnuastro/} and
@file{$HOME/.local/etc/} respectively.
+Unless you have changed it, the @file{$HOME} environment variable should point
to your home directory.
+You can check it by running @command{$ echo $HOME}.
+Each time you run any of the programs in Gnuastro, this environment variable
is read and placed in the above address.
+So if you suddenly see that your home configuration files are not being read, it is probable that you (or some other program) have changed the value of this environment variable.
@vindex --setdirconf
@vindex --setusrconf
-Although it might cause confusions like above, this dependence on the
-@file{HOME} environment variable enables you to temporarily use a different
-directory as your home directory. This can come in handy in complicated
-situations. To set the user or current directory configuration files based
-on your command-line input, you can use the @option{--setdirconf} or
-@option{--setusrconf}, see @ref{Operating mode options}.
+Although it might cause confusion like the above, this dependence on the @file{HOME} environment variable enables you to temporarily use a different directory as your home directory.
+This can come in handy in complicated situations.
+To set the user or current directory configuration files based on your
command-line input, you can use the @option{--setdirconf} or
@option{--setusrconf}, see @ref{Operating mode options}.
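The @file{HOME} dependence can be demonstrated with plain shell commands (a sketch; the directory and the configuration value are arbitrary): any program started with a modified @code{HOME} will look for its user-wide configuration under that directory instead:

```shell
# Prepare an alternative home directory with its own user-wide
# configuration ('hdu 1' is only an illustration):
mkdir -p /tmp/althome/.local/etc
echo "hdu 1" > /tmp/althome/.local/etc/gnuastro.conf

# Any command run this way sees the alternative home; a Gnuastro
# program started like this would read the configuration made above:
env HOME=/tmp/althome sh -c 'echo $HOME'    # prints /tmp/althome
```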
@@ -8594,28 +6420,16 @@ on your command-line input, you can use the
@option{--setdirconf} or
@cindex @file{prefix/etc/}
@cindex System wide configuration files
@cindex Configuration files, system wide
-When Gnuastro is installed, the configuration files that are shipped with
-the distribution are copied into the (possibly system wide)
-@file{prefix/etc/} directory. For more details on @file{prefix}, see
-@ref{Installation directory} (by default it is: @file{/usr/local}). This
-directory is the final place (with the lowest priority) that the programs
-in Gnuastro will check to retrieve parameter values.
+When Gnuastro is installed, the configuration files that are shipped with the
distribution are copied into the (possibly system wide) @file{prefix/etc/}
directory.
+For more details on @file{prefix}, see @ref{Installation directory} (by
default it is: @file{/usr/local}).
+This directory is the final place (with the lowest priority) that the programs
in Gnuastro will check to retrieve parameter values.
-If you remove an option and its value from the system wide configuration
-files, you either have to specify it in more immediate configuration files
-or set it each time in the command-line. Recall that none of the programs
-in Gnuastro keep any internal default values and will abort if they don't
-find a value for the necessary parameters (except the number of threads and
-output file name). So even though you might never expect to use an optional
-option, it safe to have it available in this system-wide configuration file
-even if you don't intend to use it frequently.
+If you remove an option and its value from the system wide configuration
files, you either have to specify it in more immediate configuration files or
set it each time in the command-line.
+Recall that none of the programs in Gnuastro keep any internal default values
and will abort if they don't find a value for the necessary parameters (except
the number of threads and output file name).
+So even though you might never expect to use an optional option, it is safe to have it available in this system-wide configuration file even if you don't intend to use it frequently.
-Note that in case you install Gnuastro from your distribution's
-repositories, @file{prefix} will either be set to @file{/} (the root
-directory) or @file{/usr}, so you can find the system wide configuration
-variables in @file{/etc/} or @file{/usr/etc/}. The prefix of
-@file{/usr/local/} is conventionally used for programs you install from
-source by your self as in @ref{Quick start}.
+Note that in case you install Gnuastro from your distribution's repositories,
@file{prefix} will either be set to @file{/} (the root directory) or
@file{/usr}, so you can find the system wide configuration variables in
@file{/etc/} or @file{/usr/etc/}.
+The prefix of @file{/usr/local/} is conventionally used for programs you install from source yourself, as in @ref{Quick start}.
@@ -8633,43 +6447,30 @@ source by your self as in @ref{Quick start}.
@cindex Book formats
@cindex Remembering options
@cindex Convenient book formats
-Probably the first time you read this book, it is either in the PDF
-or HTML formats. These two formats are very convenient for when you
-are not actually working, but when you are only reading. Later on,
-when you start to use the programs and you are deep in the middle of
-your work, some of the details will inevitably be forgotten. Going to
-find the PDF file (printed or digital) or the HTML webpage is a major
-distraction.
+Probably the first time you read this book, it is either in the PDF or HTML format.
+These two formats are very convenient for when you are not actually working,
but when you are only reading.
+Later on, when you start to use the programs and you are deep in the middle of
your work, some of the details will inevitably be forgotten.
+Going to find the PDF file (printed or digital) or the HTML webpage is a major
distraction.
@cindex Online help
@cindex Command-line help
-GNU software have a very unique set of tools for aiding your memory on
-the command-line, where you are working, depending how much of it you
-need to remember. In the past, such command-line help was known as
-``online'' help, because they were literally provided to you `on'
-the command `line'. However, nowadays the word ``online'' refers to
-something on the internet, so that term will not be used. With this
-type of help, you can resume your exciting research without taking
-your hands off the keyboard.
+GNU software has a unique set of tools for aiding your memory on the command-line, where you are working, depending on how much of it you need to remember.
+In the past, such command-line help was known as ``online'' help, because it was literally provided to you `on' the command `line'.
+However, nowadays the word ``online'' refers to something on the internet, so that term will not be used.
+With this type of help, you can resume your exciting research without taking
your hands off the keyboard.
@cindex Installed help methods
-Another major advantage of such command-line based help routines is
-that they are installed with the software in your computer, therefore
-they are always in sync with the executable you are actually
-running. Three of them are actually part of the executable. You don't
-have to worry about the version of the book or program. If you rely
-on external help (a PDF in your personal print or digital archive or
-HTML from the official webpage) you have to check to see if their
-versions fit with your installed program.
-
-If you only need to remember the short or long names of the options,
-@option{--usage} is advised. If it is what the options do, then
-@option{--help} is a great tool. Man pages are also provided for those
-who are use to this older system of documentation. This full book is
-also available to you on the command-line in Info format. If none of
-these seems to resolve the problems, there is a mailing list which
-enables you to get in touch with experienced Gnuastro users. In the
-subsections below each of these methods are reviewed.
+Another major advantage of such command-line based help routines is that they
are installed with the software in your computer, therefore they are always in
sync with the executable you are actually running.
+Three of them are actually part of the executable.
+You don't have to worry about the version of the book or program.
+If you rely on external help (a PDF in your personal print or digital archive
or HTML from the official webpage) you have to check to see if their versions
fit with your installed program.
+
+If you only need to remember the short or long names of the options,
@option{--usage} is advised.
+If it is what the options do, then @option{--help} is a great tool.
+Man pages are also provided for those who are used to this older system of documentation.
+This full book is also available to you on the command-line in Info format.
+If none of these seems to resolve your problem, there is a mailing list which enables you to get in touch with experienced Gnuastro users.
+In the subsections below, each of these methods is reviewed.
@menu
@@ -8686,10 +6487,10 @@ subsections below each of these methods are reviewed.
@cindex Usage pattern
@cindex Mandatory arguments
@cindex Optional and mandatory tokens
-If you give this option, the program will not run. It will only print a
-very concise message showing the options and arguments. Everything within
-square brackets (@option{[]}) is optional. For example here are the first
-and last two lines of Crop's @option{--usage} is shown:
+If you give this option, the program will not run.
+It will only print a very concise message showing the options and arguments.
+Everything within square brackets (@option{[]}) is optional.
+For example, here are the first and last two lines of Crop's @option{--usage} output:
@example
$ astcrop --usage
@@ -8700,77 +6501,65 @@ Usage: astcrop [-Do?IPqSVW] [-d INT] [-h INT] [-r INT]
[-w INT]
[ASCIIcatalog] FITSimage(s).fits
@end example
-There are no explanations on the options, just their short and long
-names shown separately. After the program name, the short format of
-all the options that don't require a value (on/off options) is
-displayed. Those that do require a value then follow in separate
-brackets, each displaying the format of the input they want, see
-@ref{Options}. Since all options are optional, they are shown in
-square brackets, but arguments can also be optional. For example in
-this example, a catalog name is optional and is only required in some
-modes. This is a standard method of displaying optional arguments for
-all GNU software.
+There are no explanations on the options, just their short and long names
shown separately.
+After the program name, the short format of all the options that don't require
a value (on/off options) is displayed.
+Those that do require a value then follow in separate brackets, each
displaying the format of the input they want, see @ref{Options}.
+Since all options are optional, they are shown in square brackets, but
arguments can also be optional.
+For example here, a catalog name is optional and is only required in some modes.
+This is a standard method of displaying optional arguments for all GNU
software.
@node --help, Man pages, --usage, Getting help
@subsection @option{--help}
@vindex --help
-If the command-line includes this option, the program will not be
-run. It will print a complete list of all available options along with
-a short explanation. The options are also grouped by their
-context. Within each context, the options are sorted
-alphabetically. Since the options are shown in detail afterwards, the
-first line of the @option{--help} output shows the arguments and if
-they are optional or not, similar to @ref{--usage}.
-
-In the @option{--help} output of all programs in Gnuastro, the
-options for each program are classified based on context. The first
-two contexts are always options to do with the input and output
-respectively. For example input image extensions or supplementary
-input files for the inputs. The last class of options is also fixed in
-all of Gnuastro, it shows operating mode options. Most of these
-options are already explained in @ref{Operating mode options}.
+If the command-line includes this option, the program will not be run.
+It will print a complete list of all available options along with a short
explanation.
+The options are also grouped by their context.
+Within each context, the options are sorted alphabetically.
+Since the options are shown in detail afterwards, the first line of the @option{--help} output shows the arguments and whether they are optional or not, similar to @ref{--usage}.
+
+In the @option{--help} output of all programs in Gnuastro, the options for
each program are classified based on context.
+The first two contexts are always options to do with the input and output
respectively.
+For example, input image extensions or supplementary input files for the inputs.
+The last class of options is also fixed in all of Gnuastro: it shows the operating mode options.
@cindex Long outputs
@cindex Redirection of output
@cindex Command-line, long outputs
-The help message will sometimes be longer than the vertical size of
-your terminal. If you are using a graphical user interface terminal
-emulator, you can scroll the terminal with your mouse, but we promised
-no mice distractions! So here are some suggestions:
+The help message will sometimes be longer than the vertical size of your
terminal.
+If you are using a graphical user interface terminal emulator, you can scroll
the terminal with your mouse, but we promised no mice distractions! So here are
some suggestions:
@itemize
@item
@cindex Scroll command-line
@cindex Command-line scroll
@cindex @key{Shift + PageUP} and @key{Shift + PageDown}
-@key{Shift + PageUP} to scroll up and @key{Shift + PageDown} to scroll
-down. For most help output this should be enough. The problem is that
-it is limited by the number of lines that your terminal keeps in
-memory and that you can't scroll by lines, only by whole screens.
+@key{Shift + PageUP} to scroll up and @key{Shift + PageDown} to scroll down.
+For most help output this should be enough.
+The problem is that it is limited by the number of lines that your terminal
keeps in memory and that you can't scroll by lines, only by whole screens.
@item
@cindex Pipe
@cindex @command{less}
-Pipe to @command{less}. A pipe is a form of shell re-direction. The
-@command{less} tool in Unix-like systems was made exactly for such
-outputs of any length. You can pipe (@command{|}) the output of any
-program that is longer than the screen to it and then you can scroll
-through (up and down) with its many tools. For example:
+Pipe to @command{less}.
+A pipe is a form of shell re-direction.
+The @command{less} tool in Unix-like systems was made exactly for such outputs
of any length.
+You can pipe (@command{|}) the output of any program that is longer than the
screen to it and then you can scroll through (up and down) with its many tools.
+For example:
@example
$ astnoisechisel --help | less
@end example
@noindent
-Once you have gone through the text, you can quit @command{less} by
-pressing the @key{q} key.
+Once you have gone through the text, you can quit @command{less} by pressing
the @key{q} key.
@item
@cindex Save output to file
@cindex Redirection of output
-Redirect to a file. This is a less convenient way, because you will
-then have to open the file in a text editor! You can do this with the
-shell redirection tool (@command{>}):
+Redirect to a file.
+This is a less convenient way, because you will then have to open the file in
a text editor!
+You can do this with the shell redirection tool (@command{>}):
@example
$ astnoisechisel --help > filename.txt
@end example
@@ -8779,10 +6568,9 @@ $ astnoisechisel --help > filename.txt
@cindex GNU Grep
@cindex Searching text
@cindex Command-line searching text
-In case you have a special keyword you are looking for in the help, you
-don't have to go through the full list. GNU Grep is made for this job. For
-example if you only want the list of options whose @option{--help} output
-contains the word ``axis'' in Crop, you can run the following command:
+In case you have a special keyword you are looking for in the help, you don't
have to go through the full list.
+GNU Grep is made for this job.
+For example if you only want the list of options whose @option{--help} output
contains the word ``axis'' in Crop, you can run the following command:
@example
$ astcrop --help | grep axis
@@ -8792,42 +6580,28 @@ $ astcrop --help | grep axis
@cindex Argp argument parser
@cindex Customize @option{--help} output
@cindex @option{--help} output customization
-If the output of this option does not fit nicely within the confines
-of your terminal, GNU does enable you to customize its output through
-the environment variable @code{ARGP_HELP_FMT}, you can set various
-parameters which specify the formatting of the help messages. For
-example if your terminals are wider than 70 spaces (say 100) and you
-feel there is too much empty space between the long options and the
-short explanation, you can change these formats by giving values to
-this environment variable before running the program with the
-@option{--help} output. You can define this environment variable in
-this manner:
+If the output of this option does not fit nicely within the confines of your terminal, GNU does enable you to customize its output through the environment variable @code{ARGP_HELP_FMT}: you can set various parameters which specify the formatting of the help messages.
+For example, if your terminal is wider than 70 columns (say 100) and you feel there is too much empty space between the long options and the short explanations, you can change these formats by giving values to this environment variable before running the program with the @option{--help} option.
+You can define this environment variable in this manner:
@example
$ export ARGP_HELP_FMT=rmargin=100,opt-doc-col=20
@end example
@cindex @file{.bashrc}
-This will affect all GNU programs using GNU C library's @file{argp.h}
-facilities as long as the environment variable is in memory. You can
-see the full list of these formatting parameters in the ``Argp User
-Customization'' part of the GNU C library manual. If you are more
-comfortable to read the @option{--help} outputs of all GNU software in
-your customized format, you can add your customization (similar to
-the line above, without the @command{$} sign) to your @file{~/.bashrc}
-file. This is a standard option for all GNU software.
+This will affect all GNU programs using GNU C library's @file{argp.h}
facilities as long as the environment variable is in memory.
+You can see the full list of these formatting parameters in the ``Argp User
Customization'' part of the GNU C library manual.
+If you are more comfortable reading the @option{--help} outputs of all GNU software in your customized format, you can add your customization (similar to the line above, without the @command{$} sign) to your @file{~/.bashrc} file.
+This is a standard option for all GNU software.
@node Man pages, Info, --help, Getting help
@subsection Man pages
@cindex Man pages
-Man pages were the Unix method of providing command-line documentation
-to a program. With GNU Info, see @ref{Info} the usage of this method
-of documentation is highly discouraged. This is because Info provides
-a much more easier to navigate and read environment.
+Man pages were the Unix method of providing command-line documentation for a program.
+With GNU Info (see @ref{Info}), the use of this method of documentation is highly discouraged.
+This is because Info provides an environment that is much easier to navigate and read.
-However, some operating systems require a man page for packages that
-are installed and some people are still used to this method of command
-line help. So the programs in Gnuastro also have Man pages which are
-automatically generated from the outputs of @option{--version} and
-@option{--help} using the GNU help2man program. So if you run
+However, some operating systems require a man page for installed packages, and some people are still used to this method of command-line help.
+Therefore, the programs in Gnuastro also have Man pages, which are automatically generated from the outputs of @option{--version} and @option{--help} using the GNU help2man program.
+So if you run
@example
$ man programname
@end example
@@ -8844,19 +6618,12 @@ standard manner.
@cindex GNU Info
@cindex Command-line, viewing full book
-Info is the standard documentation format for all GNU software. It is
-a very useful command-line document viewing format, fully equipped
-with links between the various pages and menus and search
-capabilities. As explained before, the best thing about it is that it
-is available for you the moment you need to refresh your memory on any
-command-line tool in the middle of your work without having to take
-your hands off the keyboard. This complete book is available in Info
-format and can be accessed from anywhere on the command-line.
+Info is the standard documentation format for all GNU software.
+It is a very useful command-line document viewing format, fully equipped with
links between the various pages and menus and search capabilities.
+As explained before, the best thing about it is that it is available for you
the moment you need to refresh your memory on any command-line tool in the
middle of your work without having to take your hands off the keyboard.
+This complete book is available in Info format and can be accessed from
anywhere on the command-line.
-To open the Info format of any installed programs or library on your
-system which has an Info format book, you can simply run the command
-below (change @command{executablename} to the executable name of the
-program or library):
+To open the Info format of any installed program or library on your system which has an Info format book, you can simply run the command below (change @command{executablename} to the executable name of the program or library):
@example
$ info executablename
@@ -8865,23 +6632,16 @@ $ info executablename
@noindent
@cindex Learning GNU Info
@cindex GNU software documentation
-In case you are not already familiar with it, run @command{$ info
-info}. It does a fantastic job in explaining all its capabilities its
-self. It is very short and you will become sufficiently fluent in
-about half an hour. Since all GNU software documentation is also
-provided in Info, your whole GNU/Linux life will significantly
-improve.
+In case you are not already familiar with it, run @command{$ info info}.
+It does a fantastic job of explaining all its capabilities itself.
+It is very short and you will become sufficiently fluent in about half an hour.
+Since all GNU software documentation is also provided in Info, your whole
GNU/Linux life will significantly improve.
@cindex GNU Emacs
@cindex GNU C library
-Once you've become an efficient navigator in Info, you can go to any
-part of this book or any other GNU software or library manual, no
-matter how long it is, in a matter of seconds. It also blends nicely
-with GNU Emacs (a text editor) and you can search manuals while you
-are writing your document or programs without taking your hands off
-the keyboard, this is most useful for libraries like the GNU C
-library. To be able to access all the Info manuals installed in your
-GNU/Linux within Emacs, type @key{Ctrl-H + i}.
+Once you've become an efficient navigator in Info, you can go to any part of
this book or any other GNU software or library manual, no matter how long it
is, in a matter of seconds.
+It also blends nicely with GNU Emacs (a text editor): you can search manuals while you are writing your document or programs without taking your hands off the keyboard, which is most useful for libraries like the GNU C library.
+To be able to access all the Info manuals installed in your GNU/Linux within
Emacs, type @key{Ctrl-H + i}.
To see this whole book from the beginning in Info, you can run
@@ -8898,18 +6658,16 @@ $ info astprogramname
@end example
@noindent
-you will be taken to the section titled ``Invoking ProgramName'' which
-explains the inputs and outputs along with the command-line options for
-that program. Finally, if you run Info with the official program name, for
-example Crop or NoiseChisel:
+you will be taken to the section titled ``Invoking ProgramName'' which
explains the inputs and outputs along with the command-line options for that
program.
+Finally, if you run Info with the official program name, for example Crop or
NoiseChisel:
@example
$ info ProgramName
@end example
@noindent
-you will be taken to the top section which introduces the
-program. Note that in all cases, Info is not case sensitive.
+you will be taken to the top section which introduces the program.
+Note that in all cases, Info is not case sensitive.
@@ -8918,23 +6676,17 @@ program. Note that in all cases, Info is not case
sensitive.
@cindex help-gnuastro mailing list
@cindex Mailing list: help-gnuastro
-Gnuastro maintains the help-gnuastro mailing list for users to ask any
-questions related to Gnuastro. The experienced Gnuastro users and some
-of its developers are subscribed to this mailing list and your email
-will be sent to them immediately. However, when contacting this
-mailing list please have in mind that they are possibly very busy and
-might not be able to answer immediately.
+Gnuastro maintains the help-gnuastro mailing list for users to ask any
questions related to Gnuastro.
+The experienced Gnuastro users and some of its developers are subscribed to
this mailing list and your email will be sent to them immediately.
+However, when contacting this mailing list, please keep in mind that they are possibly very busy and might not be able to answer immediately.
@cindex Mailing list archives
@cindex @code{help-gnuastro@@gnu.org}
-To ask a question from this mailing list, send a mail to
-@code{help-gnuastro@@gnu.org}. Anyone can view the mailing list
-archives at @url{http://lists.gnu.org/archive/html/help-gnuastro/}. It
-is best that before sending a mail, you search the archives to see if
-anyone has asked a question similar to yours. If you want to make a
-suggestion or report a bug, please don't send a mail to this mailing
-list. We have other mailing lists and tools for those purposes, see
-@ref{Report a bug} or @ref{Suggest new feature}.
+To ask a question on this mailing list, send a mail to @code{help-gnuastro@@gnu.org}.
+Anyone can view the mailing list archives at
@url{http://lists.gnu.org/archive/html/help-gnuastro/}.
+It is best that before sending a mail, you search the archives to see if
anyone has asked a question similar to yours.
+If you want to make a suggestion or report a bug, please don't send a mail to
this mailing list.
+We have other mailing lists and tools for those purposes, see @ref{Report a
bug} or @ref{Suggest new feature}.
@@ -8948,80 +6700,54 @@ list. We have other mailing lists and tools for those
purposes, see
@node Installed scripts, Multi-threaded operations, Getting help, Common
program behavior
@section Installed scripts
-Gnuastro's programs (introduced in previous chapters) are designed to be
-highly modular and thus mainly contain lower-level operations on the
-data. However, in many contexts, higher-level operations (for example a
-sequence of calls to multiple Gnuastro programs, or a special way of
-running a program and using the outputs) are also very similar between
-various projects.
+Gnuastro's programs (introduced in previous chapters) are designed to be
highly modular and thus mainly contain lower-level operations on the data.
+However, in many contexts, higher-level operations (for example a sequence of
calls to multiple Gnuastro programs, or a special way of running a program and
using the outputs) are also very similar between various projects.
-To facilitate data analysis on these higher-level steps also, Gnuastro also
-installs some scripts on your system with the (@code{astscript-}) prefix
-(in contrast to the other programs that only have the @code{ast}
-prefix).
+To facilitate data analysis in these higher-level steps too, Gnuastro also installs some scripts on your system with the @code{astscript-} prefix (in contrast to the other programs that only have the @code{ast} prefix).
@cindex GNU Bash
-Like all of Gnuastro's source code, these scripts are also heavily
-commented. They are written in GNU Bash, which doesn't need
-compilation. Therefore, if you open the installed scripts in a text editor,
-you can actually read them@footnote{Gnuastro's installed programs (those
-only starting with @code{ast}) aren't human-readable. They are written in C
-and are thus compiled (optimized in binary CPU instructions that will be
-given directly to your CPU). Because they don't need an interpreter like
-Bash on every run, they are much faster and more independent than
-scripts. To read the source code of the programs, look into the
-@file{bin/progname} directory of Gnuastro's source (@ref{Downloading the
-source}). If you would like to read more about why C was chosen for the
-programs, please see @ref{Why C}.}. Bash is the same language that is
-mainly used when typing on the command-line. Because of these factors, Bash
-is much more widely known and used than C (the language of other Gnuastro
-programs). Gnuastro's installed scripts also do higher-level operations, so
-customizing these scripts for a special project will be more common than
-the programs. You can always inspect them (to customize, check, or educate
-your self) with this command (just replace @code{emacs} with your favorite
-text editor):
+Like all of Gnuastro's source code, these scripts are also heavily commented.
+They are written in GNU Bash, which doesn't need compilation.
+Therefore, if you open the installed scripts in a text editor, you can actually read them@footnote{Gnuastro's installed programs (those only starting with @code{ast}) aren't human-readable.
+They are written in C and are thus compiled (optimized into binary CPU instructions that are given directly to your CPU).
+Because they don't need an interpreter like Bash on every run, they are much faster and more independent than scripts.
+To read the source code of the programs, look into the @file{bin/progname} directory of Gnuastro's source (@ref{Downloading the source}).
+If you would like to read more about why C was chosen for the programs, please see @ref{Why C}.}.
+Bash is the same language that is mainly used when typing on the command-line.
+Because of these factors, Bash is much more widely known and used than C (the language of the other Gnuastro programs).
+Gnuastro's installed scripts also do higher-level operations, so customizing these scripts for a special project will be more common than customizing the programs.
+You can always inspect them (to customize, check, or educate yourself) with this command (just replace @code{emacs} with your favorite text editor):
@example
$ emacs $(which astscript-NAME)
@end example
-These scripts also accept options and are in many ways similar to the
-programs (see @ref{Common options}) with some minor differences:
+These scripts also accept options and are in many ways similar to the programs (see @ref{Common options}) with some minor differences:
@itemize
@item
-Currently they don't accept configuration files themselves. However, the
-configuration files of the Gnuastro programs they call are indeed parsed
-and used by those programs.
+Currently they don't accept configuration files themselves.
+However, the configuration files of the Gnuastro programs they call are indeed parsed and used by those programs.
-As a result, they don't have the following options: @option{--checkconfig},
-@option{--config}, @option{--lastconfig}, @option{--onlyversion},
-@option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
+As a result, they don't have the following options: @option{--checkconfig}, @option{--config}, @option{--lastconfig}, @option{--onlyversion}, @option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
@item
-They don't directly allocate any memory, so there is no
-@option{--minmapsize}.
+They don't directly allocate any memory, so there is no @option{--minmapsize}.
@item
-They don't have an independent @option{--usage} option: when called with
-@option{--usage}, they just recommend running @option{--help}.
+They don't have an independent @option{--usage} option: when called with @option{--usage}, they just recommend running @option{--help}.
@item
-The output of @option{--help} is not configurable like the programs (see
-@ref{--help}).
+The output of @option{--help} is not configurable like the programs (see @ref{--help}).
@item
@cindex GNU AWK
@cindex GNU SED
-The scripts will commonly use your installed Bash and other basic
-command-line tools (for example AWK or SED). Different systems have
-different versions and implementations of these basic tools (for example
-GNU/Linux systems use GNU AWK and GNU SED which are far more advanced and
-up to date then the minimalist AWK and SED of most other
-systems). Therefore, unexpected errors in these tools might come up when
-you run these scripts. We will try our best to write these scripts in a
-portable way. However, if you do confront such strange errors, please
-submit a bug report so we fix it (see @ref{Report a bug}).
+The scripts will commonly use your installed Bash and other basic command-line tools (for example AWK or SED).
+Different systems have different versions and implementations of these basic tools (for example GNU/Linux systems use GNU AWK and GNU SED, which are far more advanced and up to date than the minimalist AWK and SED of most other systems).
+Therefore, unexpected errors in these tools might come up when you run these scripts.
+We will try our best to write these scripts in a portable way.
+However, if you do confront such strange errors, please submit a bug report so we can fix it (see @ref{Report a bug}).
@end itemize
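As a quick check of the portability point above, you can see which implementations of these basic tools your own system provides (a minimal sketch; the reported names and versions will differ between systems):

```shell
# Show which AWK and SED binaries are first in your PATH.
command -v awk sed

# GNU implementations identify themselves with --version; more
# minimalist implementations may not support this option at all.
awk --version 2>/dev/null | head -n 1 || true
sed --version 2>/dev/null | head -n 1 || true
```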
@@ -9046,26 +6772,18 @@ submit a bug report so we fix it (see @ref{Report a bug}).
@cindex Multi-threaded programs
@cindex Using multiple CPU cores
@cindex Simultaneous multithreading
-Some of the programs benefit significantly when you use all the threads
-your computer's CPU has to offer to your operating system. The number of
-threads available can be larger than the number of physical (hardware)
-cores in the CPU (also known as Simultaneous multithreading). For example,
-in Intel's CPUs (those that implement its Hyper-threading technology) the
-number of threads is usually double the number of physical cores in your
-CPU. On a GNU/Linux system, the number of threads available can be found
-with the command @command{$ nproc} command (part of GNU Coreutils).
+Some of the programs benefit significantly when you use all the threads your computer's CPU has to offer to your operating system.
+The number of threads available can be larger than the number of physical (hardware) cores in the CPU (also known as Simultaneous multithreading).
+For example, in Intel's CPUs (those that implement its Hyper-threading technology) the number of threads is usually double the number of physical cores in your CPU.
+On a GNU/Linux system, the number of threads available can be found with the @command{nproc} command (part of GNU Coreutils).
@vindex --numthreads
@cindex Number of threads available
@cindex Available number of threads
@cindex Internally stored option value
-Gnuastro's programs can find the number of threads available to your system
-internally at run-time (when you execute the program). However, if a value
-is given to the @option{--numthreads} option, the given number will be
-used, see @ref{Operating mode options} and @ref{Configuration files} for ways to
-use this option. Thus @option{--numthreads} is the only common option in
-Gnuastro's programs with a value that doesn't have to be specified anywhere
-on the command-line or in the configuration files.
+Gnuastro's programs can find the number of threads available to your system internally at run-time (when you execute the program).
+However, if a value is given to the @option{--numthreads} option, the given number will be used; see @ref{Operating mode options} and @ref{Configuration files} for ways to use this option.
+Thus @option{--numthreads} is the only common option in Gnuastro's programs with a value that doesn't have to be specified anywhere on the command-line or in the configuration files.
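For example (a sketch; the program and file names below are only illustrative), you can query the available threads yourself and pass the value explicitly:

```shell
# Number of threads the operating system makes available
# (nproc is part of GNU Coreutils).
n=$(nproc)
echo "Threads available: $n"

# Hypothetical usage: set the thread count for a program explicitly,
# overriding the internally detected value, for example:
#   astnoisechisel image.fits --numthreads=$n
```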
@menu
* A note on threads:: Caution and suggestion on using threads.
@@ -9078,38 +6796,23 @@ on the command-line or in the configuration files.
@cindex Using multiple threads
@cindex Best use of CPU threads
@cindex Efficient use of CPU threads
-Spinning off threads is not necessarily the most efficient way to run an
-application. Creating a new thread isn't a cheap operation for the
-operating system. It is most useful when the input data are fixed and you
-want the same operation to be done on parts of it. For example one input
-image to Crop and multiple crops from various parts of it. In this fashion,
-the image is loaded into memory once, all the crops are divided between the
-number of threads internally and each thread cuts out those parts which are
-assigned to it from the same image. On the other hand, if you have multiple
-images and you want to crop the same region(s) out of all of them, it is
-much more efficient to set @option{--numthreads=1} (so no threads spin off)
-and run Crop multiple times simultaneously, see @ref{How to run
-simultaneous operations}.
+Spinning off threads is not necessarily the most efficient way to run an application.
+Creating a new thread isn't a cheap operation for the operating system.
+It is most useful when the input data are fixed and you want the same operation to be done on parts of it.
+For example, one input image to Crop and multiple crops from various parts of it.
+In this fashion, the image is loaded into memory once, all the crops are divided between the number of threads internally, and each thread cuts out those parts which are assigned to it from the same image.
+On the other hand, if you have multiple images and you want to crop the same region(s) out of all of them, it is much more efficient to set @option{--numthreads=1} (so no threads spin off) and run Crop multiple times simultaneously, see @ref{How to run simultaneous operations}.
@cindex Wall-clock time
-You can check the boost in speed by first running a program on one of the
-data sets with the maximum number of threads and another time (with
-everything else the same) and only using one thread. You will notice that
-the wall-clock time (reported by most programs at their end) in the former
-is longer than the latter divided by number of physical CPU cores (not
-threads) available to your operating system. Asymptotically these two times
-can be equal (most of the time they aren't). So limiting the programs to
-use only one thread and running them independently on the number of
-available threads will be more efficient.
+You can check the boost in speed by first running a program on one of the data sets with the maximum number of threads, and then again (with everything else the same) using only one thread.
+You will notice that the wall-clock time (reported by most programs at their end) in the former is longer than the latter divided by the number of physical CPU cores (not threads) available to your operating system.
+Asymptotically these two times can be equal (most of the time they aren't).
+So limiting the programs to use only one thread and running them independently on the number of available threads will be more efficient.
@cindex System Cache
@cindex Cache, system
-Note that the operating system keeps a cache of recently processed
-data, so usually, the second time you process an identical data set
-(independent of the number of threads used), you will get faster
-results. In order to make an unbiased comparison, you have to first
-clean the system's cache with the following command between the two
-runs.
+Note that the operating system keeps a cache of recently processed data, so usually, the second time you process an identical data set (independent of the number of threads used), you will get faster results.
+In order to make an unbiased comparison, you have to first clean the system's cache with the following command between the two runs.
@example
$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
@@ -9120,14 +6823,10 @@ $ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
@strong{SUMMARY: Should I use multiple threads?} Depends:
@itemize
@item
-If you only have @strong{one} data set (image in most cases!), then
-yes, the more threads you use (with a maximum of the number of threads
-available to your OS) the faster you will get your results.
+If you only have @strong{one} data set (image in most cases!), then yes, the more threads you use (with a maximum of the number of threads available to your OS) the faster you will get your results.
@item
-If you want to run the same operation on @strong{multiple} data sets, it is
-best to set the number of threads to 1 and use Make, or GNU Parallel, as
-explained in @ref{How to run simultaneous operations}.
+If you want to run the same operation on @strong{multiple} data sets, it is best to set the number of threads to 1 and use Make, or GNU Parallel, as explained in @ref{How to run simultaneous operations}.
@end itemize
@end cartouche
@@ -9138,37 +6837,25 @@ explained in @ref{How to run simultaneous operations}.
@node How to run simultaneous operations, , A note on threads, Multi-threaded operations
@subsection How to run simultaneous operations
-There are two@footnote{A third way would be to open multiple terminal
-emulator windows in your GUI, type the commands separately on each and
-press @key{Enter} once on each terminal, but this is far too frustrating,
-tedious and prone to errors. It's therefore not a realistic solution when
-tens, hundreds or thousands of operations (your research targets,
-multiplied by the operations you do on each) are to be done.} approaches to
-simultaneously execute a program: using GNU Parallel or Make (GNU Make is
-the most common implementation). The first is very useful when you only
-want to do one job multiple times and want to get back to your work without
-actually keeping the command you ran. The second is usually for more
-important operations, with lots of dependencies between the different
-products (for example a full scientific research).
+There are two@footnote{A third way would be to open multiple terminal emulator windows in your GUI, type the commands separately on each, and press @key{Enter} once on each terminal, but this is far too frustrating, tedious and prone to errors.
+It's therefore not a realistic solution when tens, hundreds or thousands of operations (your research targets, multiplied by the operations you do on each) are to be done.} approaches to simultaneously execute a program: using GNU Parallel or Make (GNU Make is the most common implementation).
+The first is very useful when you only want to do one job multiple times and want to get back to your work without actually keeping the command you ran.
+The second is usually for more important operations, with lots of dependencies between the different products (for example a full scientific research project).
@table @asis
@item GNU Parallel
@cindex GNU Parallel
-When you only want to run multiple instances of a command on different
-threads and get on with the rest of your work, the best method is to
-use GNU parallel. Surprisingly GNU Parallel is one of the few GNU
-packages that has no Info documentation but only a Man page, see
-@ref{Info}. So to see the documentation after installing it please run
+When you only want to run multiple instances of a command on different threads and get on with the rest of your work, the best method is to use GNU Parallel.
+Surprisingly, GNU Parallel is one of the few GNU packages that has no Info documentation but only a Man page, see @ref{Info}.
+So to see the documentation after installing it, please run
@example
$ man parallel
@end example
@noindent
-As an example, let's assume we want to crop a region fixed on the
-pixels (500, 600) with the default width from all the FITS images in
-the @file{./data} directory ending with @file{sci.fits} to the current
-directory. To do this, you can run:
+As an example, let's assume we want to crop a region fixed on the pixels (500, 600) with the default width from all the FITS images in the @file{./data} directory ending with @file{sci.fits} to the current directory.
+To do this, you can run:
@example
$ parallel astcrop --numthreads=1 --xc=500 --yc=600 ::: \
@@ -9176,62 +6863,38 @@ $ parallel astcrop --numthreads=1 --xc=500 --yc=600 ::: \
@end example
@noindent
-GNU Parallel can help in many more conditions, this is one of the
-simplest, see the man page for lots of other examples. For absolute
-beginners: the backslash (@command{\}) is only a line breaker to fit
-nicely in the page. If you type the whole command in one line, you
-should remove it.
+GNU Parallel can help in many more situations; this is one of the simplest, so see the man page for lots of other examples.
+For absolute beginners: the backslash (@command{\}) is only a line breaker to fit nicely in the page.
+If you type the whole command in one line, you should remove it.
@item Make
@cindex Make
-Make is a program for building ``targets'' (e.g., files) using ``recipes''
-(a set of operations) when their known ``prerequisites'' (other files) have
-been updated. It elegantly allows you to define dependency structures for
-building your final output and updating it efficiently when the inputs
-change. It is the most common infra-structure to build software
-today.
-
-Scientific research methodology is very similar to software development:
-you start by testing a hypothesis on a small sample of objects/targets with
-a simple set of steps. As you are able to get promising results, you
-improve the method and use it on a larger, more general, sample. In the
-process, you will confront many issues that have to be corrected (bugs in
-software development jargon). Make a wonderful tool to manage this style of
-development. It has been used to make reproducible papers, for example see
-@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction
-pipeline} of the paper introducing @ref{NoiseChisel} (one of Gnuastro's
-programs).
+Make is a program for building ``targets'' (e.g., files) using ``recipes'' (a set of operations) when their known ``prerequisites'' (other files) have been updated.
+It elegantly allows you to define dependency structures for building your final output and updating it efficiently when the inputs change.
+It is the most common infrastructure for building software today.
+
+Scientific research methodology is very similar to software development: you start by testing a hypothesis on a small sample of objects/targets with a simple set of steps.
+As you are able to get promising results, you improve the method and use it on a larger, more general, sample.
+In the process, you will confront many issues that have to be corrected (bugs in software development jargon).
+Make is a wonderful tool for managing this style of development.
+It has been used to make reproducible papers, for example see @url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction pipeline} of the paper introducing @ref{NoiseChisel} (one of Gnuastro's programs).
@cindex GNU Make
-GNU Make@footnote{@url{https://www.gnu.org/software/make/}} is the most
-common implementation which (similar to nearly all GNU programs, comes with
-a wonderful
-manual@footnote{@url{https://www.gnu.org/software/make/manual/}}). Make is
-very basic and simple, and thus the manual is short (the most important
-parts are in the first roughly 100 pages) and easy to read/understand.
+GNU Make@footnote{@url{https://www.gnu.org/software/make/}} is the most common implementation which (similar to nearly all GNU programs) comes with a wonderful manual@footnote{@url{https://www.gnu.org/software/make/manual/}}.
+Make is very basic and simple, and thus the manual is short (the most important parts are in the first roughly 100 pages) and easy to read/understand.
-Make comes with a @option{--jobs} (@option{-j}) option which allows you to
-specify the maximum number of jobs that can be done simultaneously. For
-example if you have 8 threads available to your operating system. You can
-run:
+Make comes with a @option{--jobs} (@option{-j}) option which allows you to specify the maximum number of jobs that can be done simultaneously.
+For example, if you have 8 threads available to your operating system, you can run:
@example
$ make -j8
@end example
-With this command, Make will process your @file{Makefile} and create all
-the targets (can be thousands of FITS images for example) simultaneously on
-8 threads, while fully respecting their dependencies (only building a
-file/target when its prerequisites are successfully built). Make is thus
-strongly recommended for managing scientific research where robustness,
-archiving, reproducibility and speed@footnote{Besides its multi-threaded
-capabilities, Make will only re-build those targets that depend on a change
-you have made, not the whole work. For example, if you have set the
-prerequisites properly, you can easily test the changing of a parameter on
-your paper's results without having to re-do everything (which is much
-faster). This allows you to be much more productive in easily checking
-various ideas/assumptions of the different stages of your research and thus
-produce a more robust result for your exciting science.} are important.
+With this command, Make will process your @file{Makefile} and create all the targets (can be thousands of FITS images for example) simultaneously on 8 threads, while fully respecting their dependencies (only building a file/target when its prerequisites are successfully built).
+Make is thus strongly recommended for managing scientific research where robustness, archiving, reproducibility and speed@footnote{Besides its multi-threaded capabilities, Make will only re-build those targets that depend on a change you have made, not the whole work.
+For example, if you have set the prerequisites properly, you can easily test the effect of changing a parameter on your paper's results without having to re-do everything (which is much faster).
+This allows you to be much more productive in easily checking various ideas/assumptions of the different stages of your research and thus produce a more robust result for your exciting science.} are important.
@end table
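The dependency behavior described above can be sketched with a toy Makefile (hypothetical target names; the two targets are independent of each other, so @option{-j} may build them simultaneously while still respecting prerequisites):

```shell
# Write a tiny Makefile: 'all' depends on two independent targets.
# Recipe lines must begin with a TAB, hence the '\t' in printf.
printf 'all: a.txt b.txt\na.txt:\n\techo A > a.txt\nb.txt:\n\techo B > b.txt\n' \
       > Makefile.demo

# Build with up to two simultaneous jobs.
make -f Makefile.demo -j2

# Both targets were built; collect and show their contents.
result=$(cat a.txt b.txt)
echo "$result"

# Clean up the demonstration files.
rm -f Makefile.demo a.txt b.txt
```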
@@ -9244,60 +6907,48 @@ produce a more robust result for your exciting science.} are important.
@cindex Bit
@cindex Type
-At the lowest level, the computer stores everything in terms of @code{1} or
-@code{0}. For example, each program in Gnuastro, or each astronomical image
-you take with the telescope is actually a string of millions of these zeros
-and ones. The space required to keep a zero or one is the smallest unit of
-storage, and is known as a @emph{bit}. However, understanding and
-manipulating this string of bits is extremely hard for most
-people. Therefore, we define packages of these bits along with a standard
-on how to interpret the bits in each package as a @emph{type}.
+At the lowest level, the computer stores everything in terms of @code{1} or @code{0}.
+For example, each program in Gnuastro, or each astronomical image you take with the telescope, is actually a string of millions of these zeros and ones.
+The space required to keep a zero or one is the smallest unit of storage, and is known as a @emph{bit}.
+However, understanding and manipulating this string of bits is extremely hard for most people.
+Therefore, different standards are defined to package the bits into separate @emph{type}s with a fixed interpretation of the bits in each package.
@cindex Byte
@cindex Signed integer
@cindex Unsigned integer
@cindex Integer, Signed
-The most basic standard for reading the bits is integer numbers
-(@mymath{..., -2, -1, 0, 1, 2, ...}, more bits will give larger
-limits). The common integer types are 8, 16, 32, and 64 bits wide. For each
-width, there are two standards for reading the bits: signed and unsigned
-integers. In the former, negative numbers are allowed and in the latter,
-they aren't. The @code{unsigned} types thus have larger positive limits
-(one extra bit), but no negative value. When the context of your work
-doesn't involve negative numbers (for example counting, where negative is
-not defined), it is best to use the @code{unsigned} types. For full
-numerical range of all integer types, see below.
-
-Another standard of converting a given number of bits to numbers is the
-floating point standard, this standard can approximately store any real
-number with a given precision. There are two common floating point types:
-32-bit and 64-bit, for single and double precision floating point numbers
-respectively. The former is sufficient for data with less than 8
-significant decimal digits (most astronomical data), while the latter is
-good for less than 16 significant decimal digits. The representation of
-real numbers as bits is much more complex than integers. If you are
-interested, you can start with the
-@url{https://en.wikipedia.org/wiki/Floating_point, Wikipedia article}.
-
-With the conversion operators in Gnuastro's Arithmetic, you can change the
-types of data to each other, which is necessary in some contexts. For
-example the program/library, that you intend to feed the data into, only
-accepts floating point values, but you have an integer image. Another
-situation that conversion can be helpful is when you know that your data
-only has values that fit within @code{int8} or @code{uint16}. However it is
-currently formatted in the @code{float64} type. Operations involving
-floating point or larger integer types are significantly slower than
-integer or smaller-width types respectively. In the latter case, it also
-requires much more (by 8 or 4 times in the example above) storage space. So
-when you confront such situations and want to store/archive/transfter the
-data, it is best convert them to the most efficient type.
-
-The short and long names for the recognized numeric data types in Gnuastro
-are listed below. Both short and long names can be used when you want to
-specify a type. For example, as a value to the common option
-@option{--type} (see @ref{Input output options}), or in the information
-comment lines of @ref{Gnuastro text table format}. The ranges listed below
-are inclusive.
+To store numbers, the most basic standard/type is for integers (@mymath{..., -2, -1, 0, 1, 2, ...}).
+The common integer types are 8, 16, 32, and 64 bits wide (more bits will give larger limits).
+Each bit corresponds to a power of 2, and they are summed to create the final number.
+In the integer types, for each width there are two standards for reading the bits: signed and unsigned.
+In the `signed' convention, one bit is reserved for the sign (stating that the integer is positive or negative).
+The `unsigned' integers use that bit in the actual number and thus contain only positive numbers (starting from zero).
+
+Therefore, at the same number of bits, signed and unsigned integers can represent the same number of values, but the positive limit of the @code{unsigned} types is double that of their @code{signed} counterparts with the same width (at the expense of not having negative numbers).
+When the context of your work doesn't involve negative numbers (for example counting, where negative is not defined), it is best to use the @code{unsigned} types.
+For the full numerical range of all integer types, see below.
+
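These limits follow directly from the bit width; for instance (a sketch using the shell's integer arithmetic):

```shell
# Largest value of an 8-bit unsigned integer: all 8 bits in the number.
echo $(( (1<<8) - 1 ))    # 255
# Largest value of an 8-bit signed integer: one bit holds the sign.
echo $(( (1<<7) - 1 ))    # 127
# Smallest (most negative) 8-bit signed integer.
echo $(( -(1<<7) ))       # -128
# Largest value of a 16-bit unsigned integer.
echo $(( (1<<16) - 1 ))   # 65535
```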
+Another standard of converting a given number of bits to numbers is the floating point standard; this standard can @emph{approximately} store any real number with a given precision.
+There are two common floating point types: 32-bit and 64-bit, for single and double precision floating point numbers respectively.
+The former is sufficient for data with less than 8 significant decimal digits (most astronomical data), while the latter is good for less than 16 significant decimal digits.
+The representation of real numbers as bits is much more complex than integers.
+If you are interested in learning more about it, you can start with the @url{https://en.wikipedia.org/wiki/Floating_point, Wikipedia article}.
+
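The @emph{approximate} nature of floating point can be seen even for a number as simple as one tenth (a sketch using AWK, which computes in 64-bit double precision):

```shell
# Print 0.1 with 17 significant digits: the nearest representable
# 64-bit floating point value is not exactly one tenth.
awk 'BEGIN { printf "%.17g\n", 0.1 }'   # 0.10000000000000001
```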
+Practically, you can use Gnuastro's Arithmetic program to convert/change the type of an image/datacube (see @ref{Arithmetic}), or Gnuastro's Table program to convert a table column's data type (see @ref{Column arithmetic}).
+Conversion of a dataset's type is necessary in some contexts.
+For example, the program/library that you intend to feed the data into may only accept floating point values, but you have an integer image/column.
+Another situation where conversion can be helpful is when you know that your data only have values that fit within @code{int8} or @code{uint16}, but are currently formatted in the @code{float64} type.
+
+The important thing to consider is that operations involving wider, floating point, or signed types can be significantly slower than smaller-width, integer, or unsigned types respectively.
+Note that besides speed, a wider type also requires much more storage space (by 4 or 8 times).
+Therefore, when you confront such situations that can be optimized and want to store/archive/transfer the data, it is best to use the most efficient type.
+For example, if your dataset (image or table column) only has positive integers less than 65535, store it as an unsigned 16-bit integer for faster processing, faster transfer, and less storage space.
+
+The short and long names for the recognized numeric data types in Gnuastro are listed below.
+Both short and long names can be used when you want to specify a type.
+For example, as a value to the common option @option{--type} (see @ref{Input output options}), or in the information comment lines of @ref{Gnuastro text table format}.
+The ranges listed below are inclusive.
@table @code
@item u8
@@ -9342,33 +6993,25 @@ are inclusive.
@item f32
@itemx float32
-32-bit (single-precision) floating point types. The maximum (minimum is its
-negative) possible value is
-@mymath{3.402823\times10^{38}}. Single-precision floating points can
-accurately represent a floating point number up to @mymath{\sim7.2}
-significant decimals. Given the heavy noise in astronomical data, this is
-usually more than sufficient for storing results.
+32-bit (single-precision) floating point types.
+The maximum (minimum is its negative) possible value is @mymath{3.402823\times10^{38}}.
+Single-precision floating points can accurately represent a floating point number up to @mymath{\sim7.2} significant decimals.
+Given the heavy noise in astronomical data, this is usually more than sufficient for storing results.
@item f64
@itemx float64
-64-bit (double-precision) floating point types. The maximum (minimum is its
-negative) possible value is @mymath{\sim10^{308}}. Double-precision
-floating points can accurately represent a floating point number
-@mymath{\sim15.9} significant decimals. This is usually good for processing
-(mixing) the data internally, for example a sum of single precision data
-(and later storing the result as @code{float32}).
+64-bit (double-precision) floating point types.
+The maximum (minimum is its negative) possible value is @mymath{\sim10^{308}}.
+Double-precision floating points can accurately represent a floating point number up to @mymath{\sim15.9} significant decimals.
+This is usually good for processing (mixing) the data internally, for example a sum of single precision data (and later storing the result as @code{float32}).
@end table
@cartouche
@noindent
-@strong{Some file formats don't recognize all types.} Some file formats
-don't recognize all the types, for example the FITS standard (see
-@ref{Fits}) does not define @code{uint64} in binary tables or images. When
-a type is not acceptable for output into a given file format, the
-respective Gnuastro program or library will let you know and abort. On the
-command-line, you can use the @ref{Arithmetic} program to convert the
-numerical type of a dataset, in the libraries, you can call
-@code{gal_data_copy_to_new_type}.
+@strong{Some file formats don't recognize all types.} For example the FITS standard (see @ref{Fits}) does not define @code{uint64} in binary tables or images.
+When a type is not acceptable for output into a given file format, the respective Gnuastro program or library will let you know and abort.
+On the command-line, you can convert the numerical type of an image, or table column into another type with @ref{Arithmetic} or @ref{Table} respectively.
+If you are writing your own program, you can use the @code{gal_data_copy_to_new_type()} function in Gnuastro's library, see @ref{Copying datasets}.
@end cartouche
@@ -9376,51 +7019,30 @@ numerical type of a dataset, in the libraries, you can call
@node Tables, Tessellation, Numeric data types, Common program behavior
@section Tables
-``A table is a collection of related data held in a structured format
-within a database. It consists of columns, and rows.'' (from
-Wikipedia). Each column in the table contains the values of one property
-and each row is a collection of properties (columns) for one target
-object. For example, let's assume you have just ran MakeCatalog (see
-@ref{MakeCatalog}) on an image to measure some properties for the labeled
-regions (which might be detected galaxies for example) in the image. For
-each labeled region (detected galaxy), there will be a @emph{row} which
-groups its measured properties as @emph{columns}, one column for each
-property. One such property can be the object's magnitude, which is the sum
-of pixels with that label, or its center can be defined as the
-light-weighted average value of those pixels. Many such properties can be
-derived from the raw pixel values and their position, see @ref{Invoking
-astmkcatalog} for a long list.
-
-As a summary, for each labeled region (or, galaxy) we have one @emph{row}
-and for each measured property we have one @emph{column}. This high-level
-structure is usually the first step for higher-level analysis, for example
-finding the stellar mass or photometric redshift from magnitudes in
-multiple colors. Thus, tables are not just outputs of programs, in fact it
-is much more common for tables to be inputs of programs. For example, to
-make a mock galaxy image, you need to feed in the properties of each galaxy
-into @ref{MakeProfiles} for it do the inverse of the process above and make
-a simulated image from a catalog, see @ref{Sufi simulates a detection}. In
-other cases, you can feed a table into @ref{Crop} and it will crop out
-regions centered on the positions within the table, see @ref{Finding
-reddest clumps and visual inspection}. So to end this relatively long
-introduction, tables play a very important role in astronomy, or generally
-all branches of data analysis.
-
-In @ref{Recognized table formats} the currently recognized table formats in
-Gnuastro are discussed. You can use any of these tables as input or ask for
-them to be built as output. The most common type of table format is a
-simple plain text file with each row on one line and columns separated by
-white space characters, this format is easy to read/write by eye/hand. To
-give it the full functionality of more specific table types like the FITS
-tables, Gnuastro has a special convention which you can use to give each
-column a name, type, unit, and comments, while still being readable by
-other plain text table readers. This convention is described in
-@ref{Gnuastro text table format}.
-
-When tables are input to a program, the program reading it needs to know
-which column(s) it should use for its desired purposes. Gnuastro's programs
-all follow a similar convention, on the way you can select columns in a
-table. They are thoroughly discussed in @ref{Selecting table columns}.
+``A table is a collection of related data held in a structured format within a
database.
+It consists of columns, and rows.'' (from Wikipedia).
+Each column in the table contains the values of one property and each row is a
collection of properties (columns) for one target object.
+For example, let's assume you have just run MakeCatalog (see @ref{MakeCatalog}) on an image to measure some properties for the labeled regions (which might be detected galaxies for example) in the image.
+For each labeled region (detected galaxy), there will be a @emph{row} which
groups its measured properties as @emph{columns}, one column for each property.
+One such property can be the object's magnitude, which comes from the sum of its pixel values, or its center, which can be defined as the light-weighted average position of those pixels.
+Many such properties can be derived from the raw pixel values and their
position, see @ref{Invoking astmkcatalog} for a long list.
+
+As a summary, for each labeled region (or, galaxy) we have one @emph{row} and
for each measured property we have one @emph{column}.
+This high-level structure is usually the first step for higher-level analysis,
for example finding the stellar mass or photometric redshift from magnitudes in
multiple colors.
+Thus, tables are not just outputs of programs; in fact, it is much more common for tables to be inputs of programs.
+For example, to make a mock galaxy image, you need to feed the properties of each galaxy into @ref{MakeProfiles} for it to do the inverse of the process above and make a simulated image from a catalog, see @ref{Sufi simulates a detection}.
+In other cases, you can feed a table into @ref{Crop} and it will crop out
regions centered on the positions within the table, see @ref{Finding reddest
clumps and visual inspection}.
+So to end this relatively long introduction, tables play a very important role
in astronomy, or generally all branches of data analysis.
+
+In @ref{Recognized table formats} the currently recognized table formats in
Gnuastro are discussed.
+You can use any of these tables as input or ask for them to be built as output.
+The most common type of table format is a simple plain text file with each row on one line and columns separated by white space characters; this format is easy to read/write by eye/hand.
+To give it the full functionality of more specific table types like the FITS
tables, Gnuastro has a special convention which you can use to give each column
a name, type, unit, and comments, while still being readable by other plain
text table readers.
+This convention is described in @ref{Gnuastro text table format}.
+
+When tables are input to a program, the program reading it needs to know which
column(s) it should use for its desired purposes.
+Gnuastro's programs all follow a similar convention for the way you can select columns in a table.
+They are thoroughly discussed in @ref{Selecting table columns}.
@menu
@@ -9432,92 +7054,57 @@ table. They are thoroughly discussed in @ref{Selecting table columns}.
@node Recognized table formats, Gnuastro text table format, Tables, Tables
@subsection Recognized table formats
-The list of table formats that Gnuastro can currently read from and write
-to are described below. Each has their own advantage and disadvantages, so a
-short review of the format is also provided to help you make the best
-choice based on how you want to define your input tables or later use your
-output tables.
+The table formats that Gnuastro can currently read from and write to are described below.
+Each has its own advantages and disadvantages, so a short review of each format is also provided to help you make the best choice based on how you want to define your input tables or later use your output tables.
@table @asis
@item Plain text table
-This is the most basic and simplest way to create, view, or edit the table
-by hand on a text editor. The other formats described below are less
-eye-friendly and have a more formal structure (for easier computer
-readability). It is fully described in @ref{Gnuastro text table format}.
+This is the most basic and simplest way to create, view, or edit the table by
hand on a text editor.
+The other formats described below are less eye-friendly and have a more formal
structure (for easier computer readability).
+It is fully described in @ref{Gnuastro text table format}.
@cindex FITS Tables
@cindex Tables FITS
@cindex ASCII table, FITS
@item FITS ASCII tables
-The FITS ASCII table extension is fully in ASCII encoding and thus easily
-readable on any text editor (assuming it is the only extension in the FITS
-file). If the FITS file also contains binary extensions (for example an
-image or binary table extensions), then there will be many hard to print
-characters. The FITS ASCII format doesn't have new line characters to
-separate rows. In the FITS ASCII table standard, each row is defined as a
-fixed number of characters (value to the @code{NAXIS1} keyword), so to
-visually inspect it properly, you would have to adjust your text editor's
-width to this value. All columns start at given character positions and
-have a fixed width (number of characters).
-
-Numbers in a FITS ASCII table are printed into ASCII format, they are not
-in binary (that the CPU uses). Hence, they can take a larger space in
-memory, loose their precision, and take longer to read into memory. If you
-are dealing with integer type columns (see @ref{Numeric data types}),
-another issue with FITS ASCII tables is that the type information for the
-column will be lost (there is only one integer type in FITS ASCII
-tables). One problem with the binary format on the other hand is that it
-isn't portable (different CPUs/compilers) have different standards for
-translating the zeros and ones. But since ASCII characters are defined on a
-byte and are well recognized, they are better for portability on those
-various systems. Gnuastro's plain text table format described below is much
-more portable and easier to read/write/interpret by humans manually.
-
-Generally, as the name implies, this format is useful for when your table
-mainly contains ASCII columns (for example file names, or
-descriptions). They can be useful when you need to include columns with
-structured ASCII information along with other extensions in one FITS
-file. In such cases, you can also consider header keywords (see
-@ref{Fits}).
+The FITS ASCII table extension is fully in ASCII encoding and thus easily
readable on any text editor (assuming it is the only extension in the FITS
file).
+If the FITS file also contains binary extensions (for example an image or
binary table extensions), then there will be many hard to print characters.
+The FITS ASCII format doesn't have new line characters to separate rows.
+In the FITS ASCII table standard, each row is defined as a fixed number of characters (the value of the @code{NAXIS1} keyword), so to visually inspect it properly, you would have to adjust your text editor's width to this value.
+All columns start at given character positions and have a fixed width (number
of characters).
+
+Numbers in a FITS ASCII table are printed into ASCII format, they are not in
binary (that the CPU uses).
+Hence, they can take a larger space in memory, lose their precision, and take longer to read into memory.
+If you are dealing with integer type columns (see @ref{Numeric data types}),
another issue with FITS ASCII tables is that the type information for the
column will be lost (there is only one integer type in FITS ASCII tables).
+One problem with the binary format, on the other hand, is that it isn't portable: different CPUs/compilers have different standards for translating the zeros and ones.
+But since ASCII characters are defined on a byte and are well recognized, they
are better for portability on those various systems.
+Gnuastro's plain text table format described below is much more portable and
easier to read/write/interpret by humans manually.
+
+Generally, as the name implies, this format is useful when your table mainly contains ASCII columns (for example file names, or descriptions).
+They can be useful when you need to include columns with structured ASCII
information along with other extensions in one FITS file.
+In such cases, you can also consider header keywords (see @ref{Fits}).
@cindex Binary table, FITS
@item FITS binary tables
-The FITS binary table is the FITS standard's solution to the issues
-discussed with keeping numbers in ASCII format as described under the FITS
-ASCII table title above. Only columns defined as a string type (a string of
-ASCII characters) are readable in a text editor. The portability problem
-with binary formats discussed above is mostly solved thanks to the
-portability of CFITSIO (see @ref{CFITSIO}) and the very long history of the
-FITS format which has been widely used since the 1970s.
-
-In the case of most numbers, storing them in binary format is more memory
-efficient than ASCII format. For example, to store @code{-25.72034} in
-ASCII format, you need 9 bytes/characters. But if you keep this same number
-(to the approximate precision possible) as a 4-byte (32-bit) floating point
-number, you can keep/transmit it with less than half the amount of
-memory. When catalogs contain thousands/millions of rows in tens/hundreds
-of columns, this can lead to significant improvements in memory/band-width
-usage. Moreover, since the CPU does its operations in the binary formats,
-reading the table in and writing it out is also much faster than an ASCII
-table.
+The FITS binary table is the FITS standard's solution to the issues with keeping numbers in ASCII format that were discussed under the FITS ASCII table title above.
+Only columns defined as a string type (a string of ASCII characters) are
readable in a text editor.
+The portability problem with binary formats discussed above is mostly solved
thanks to the portability of CFITSIO (see @ref{CFITSIO}) and the very long
history of the FITS format which has been widely used since the 1970s.
+
+In the case of most numbers, storing them in binary format is more memory
efficient than ASCII format.
+For example, to store @code{-25.72034} in ASCII format, you need 9
bytes/characters.
+But if you keep this same number (to the approximate precision possible) as a
4-byte (32-bit) floating point number, you can keep/transmit it with less than
half the amount of memory.
+When catalogs contain thousands/millions of rows in tens/hundreds of columns,
this can lead to significant improvements in memory/band-width usage.
+Moreover, since the CPU does its operations in the binary formats, reading the
table in and writing it out is also much faster than an ASCII table.
+
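The byte counts above can be verified with Python's standard @code{struct} module (an illustration only, not Gnuastro code):

```python
import struct

ascii_form = b"-25.72034"                   # 9 bytes/characters in ASCII
binary_form = struct.pack('<f', -25.72034)  # 4 bytes as a 32-bit float

assert len(ascii_form) == 9
assert len(binary_form) == 4

# The 4-byte value is only approximately the original number:
recovered = struct.unpack('<f', binary_form)[0]
assert abs(recovered - (-25.72034)) < 1e-5
```

The same trade-off the text describes is visible here: less than half the storage, at the cost of single-precision rounding.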
-When you are dealing with integer numbers, the compression ratio can be
-even better, for example if you know all of the values in a column are
-positive and less than @code{255}, you can use the @code{unsigned char}
-type which only takes one byte! If they are between @code{-128} and
-@code{127}, then you can use the (signed) @code{char} type. So if you are
-thoughtful about the limits of your integer columns, you can greatly reduce
-the size of your file and also the speed at which it is read/written. This
-can be very useful when sharing your results with collaborators or
-publishing them. To decrease the file size even more you can name your
-output as ending in @file{.fits.gz} so it is also compressed after
-creation. Just note that compression/decompressing is CPU intensive and can
-slow down the writing/reading of the file.
-
-Fortunately the FITS Binary table format also accepts ASCII strings as
-column types (along with the various numerical types). So your dataset can
-also contain non-numerical columns.
+When you are dealing with integer numbers, the memory savings can be even better: for example, if you know all of the values in a column are positive and less than @code{255}, you can use the @code{unsigned char} type, which only takes one byte!
+If they are between @code{-128} and @code{127}, then you can use the (signed) @code{char} type.
+So if you are thoughtful about the limits of your integer columns, you can greatly reduce the size of your file and also the time it takes to read/write it.
+This can be very useful when sharing your results with collaborators or publishing them.
+To decrease the file size even more you can name your output as ending in @file{.fits.gz} so it is also compressed after creation.
+Just note that compressing/decompressing is CPU intensive and can slow down the writing/reading of the file.
+
+Fortunately the FITS binary table format also accepts ASCII strings as column types (along with the various numerical types).
+So your dataset can also contain non-numerical columns.
@end table
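The one-byte figure quoted for @code{unsigned char} columns above can also be checked with Python's standard @code{struct} module (illustrative only, not Gnuastro code):

```python
import struct

counts = [0, 17, 254, 255]            # all fit in unsigned 8-bit (0..255)
packed = struct.pack('4B', *counts)   # 'B' = unsigned char: one byte each
assert len(packed) == 4

signed = struct.pack('4b', -128, -1, 0, 127)  # 'b' = signed char: -128..127
assert len(signed) == 4

# The same four values stored as 64-bit integers need 8x the space:
assert len(struct.pack('4q', *counts)) == 32
```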
@@ -9528,70 +7115,45 @@ also contain non-numerical columns.
@node Gnuastro text table format, Selecting table columns, Recognized table formats, Tables
@subsection Gnuastro text table format
-Plain text files are the most generic, portable, and easiest way to
-(manually) create, (visually) inspect, or (manually) edit a table. In this
-format, the ending of a row is defined by the new-line character (a line on
-a text editor). So when you view it on a text editor, every row will occupy
-one line. The delimiters (or characters separating the columns) are white
-space characters (space, horizontal tab, vertical tab) and a comma
-(@key{,}). The only further requirement is that all rows/lines must have
-the same number of columns.
-
-The columns don't have to be exactly under each other and the rows can be
-arbitrarily long with different lengths. For example the following contents
-in a file would be interpreted as a table with 4 columns and 2 rows, with
-each element interpreted as a @code{double} type (see @ref{Numeric data
-types}).
+Plain text files are the most generic, portable, and easiest way to (manually)
create, (visually) inspect, or (manually) edit a table.
+In this format, the ending of a row is defined by the new-line character (a
line on a text editor).
+So when you view it on a text editor, every row will occupy one line.
+The delimiters (or characters separating the columns) are white space
characters (space, horizontal tab, vertical tab) and a comma (@key{,}).
+The only further requirement is that all rows/lines must have the same number
of columns.
+
+The columns don't have to be exactly under each other and the rows can be
arbitrarily long with different lengths.
+For example the following contents in a file would be interpreted as a table
with 4 columns and 2 rows, with each element interpreted as a @code{double}
type (see @ref{Numeric data types}).
@example
1 2.234948 128 39.8923e8
2 , 4.454 792 72.98348e7
@end example
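The delimiter rule described above can be mimicked in a few lines of Python (an illustration of the convention, not Gnuastro's actual reader):

```python
import re

raw = """\
1    2.234948   128   39.8923e8
2 , 4.454 792     72.98348e7
"""

# Split each non-empty line on runs of white space and/or commas,
# reading every element as double precision (the default type).
rows = [
    [float(v) for v in re.split(r'[,\s]+', line.strip())]
    for line in raw.splitlines() if line.strip()
]

assert len(rows) == 2                    # two rows ...
assert all(len(r) == 4 for r in rows)    # ... of four columns each
```

Note how the comma in the second row is just another delimiter: both rows still parse to the same number of columns, as the format requires.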
-However, the example above has no other information about the columns (it
-is just raw data, with no meta-data). To use this table, you have to
-remember what the numbers in each column represent. Also, when you want to
-select columns, you have to count their position within the table. This can
-become frustrating and prone to bad errors (getting the columns wrong)
-especially as the number of columns increase. It is also bad for sending to
-a colleague, because they will find it hard to remember/use the columns
-properly.
-
-To solve these problems in Gnuastro's programs/libraries you aren't limited
-to using the column's number, see @ref{Selecting table columns}. If the
-columns have names, units, or comments you can also select your columns
-based on searches/matches in these fields, for example see @ref{Table}.
-Also, in this manner, you can't guide the program reading the table on how
-to read the numbers. As an example, the first and third columns above can
-be read as integer types: the first column might be an ID and the third can
-be the number of pixels an object occupies in an image. So there is no need
-to read these to columns as a @code{double} type (which takes more memory,
-and is slower).
-
-In the bare-minimum example above, you also can't use strings of
-characters, for example the names of filters, or some other identifier that
-includes non-numerical characters. In the absence of any information, only
-numbers can be read robustly. Assuming we read columns with non-numerical
-characters as string, there would still be the problem that the strings
-might contain space (or any delimiter) character for some rows. So, each
-`word' in the string will be interpreted as a column and the program will
-abort with an error that the rows don't have the same number of columns.
-
-To correct for these limitations, Gnuastro defines the following convention
-for storing the table meta-data along with the raw data in one plain text
-file. The format is primarily designed for ease of reading/writing by
-eye/fingers, but is also structured enough to be read by a program.
-
-When the first non-white character in a line is @key{#}, or there are no
-non-white characters in it, then the line will not be considered as a row
-of data in the table (this is a pretty standard convention in many
-programs, and higher level languages). In the former case, the line is
-interpreted as a @emph{comment}. If the comment line starts with `@code{#
-Column N:}', then it is assumed to contain information about column
-@code{N} (a number, counting from 1). Comment lines that don't start with
-this pattern are ignored and you can use them to include any further
-information you want to store with the table in the text file. A column
-information comment is assumed to have the following format:
+However, the example above has no other information about the columns (it is
just raw data, with no meta-data).
+To use this table, you have to remember what the numbers in each column
represent.
+Also, when you want to select columns, you have to count their position within
the table.
+This can become frustrating and prone to errors (getting the columns wrong), especially as the number of columns increases.
+It is also bad for sending to a colleague, because they will find it hard to
remember/use the columns properly.
+
+To solve these problems in Gnuastro's programs/libraries you aren't limited to
using the column's number, see @ref{Selecting table columns}.
+If the columns have names, units, or comments you can also select your columns
based on searches/matches in these fields, for example see @ref{Table}.
+Also, without such metadata, you can't guide the program reading the table on how to read the numbers.
+As an example, the first and third columns above can be read as integer types:
the first column might be an ID and the third can be the number of pixels an
object occupies in an image.
+So there is no need to read these two columns as a @code{double} type (which takes more memory, and is slower).
+
+In the bare-minimum example above, you also can't use strings of characters,
for example the names of filters, or some other identifier that includes
non-numerical characters.
+In the absence of any information, only numbers can be read robustly.
+Assuming we read columns with non-numerical characters as strings, there would still be the problem that the strings might contain a space (or any delimiter) character in some rows.
+So, each `word' in the string will be interpreted as a column and the program
will abort with an error that the rows don't have the same number of columns.
+
+To correct for these limitations, Gnuastro defines the following convention
for storing the table meta-data along with the raw data in one plain text file.
+The format is primarily designed for ease of reading/writing by eye/fingers,
but is also structured enough to be read by a program.
+
+When the first non-white character in a line is @key{#}, or there are no
non-white characters in it, then the line will not be considered as a row of
data in the table (this is a pretty standard convention in many programs, and
higher level languages).
+In the former case, the line is interpreted as a @emph{comment}.
+If the comment line starts with `@code{# Column N:}', then it is assumed to
contain information about column @code{N} (a number, counting from 1).
+Comment lines that don't start with this pattern are ignored and you can use
them to include any further information you want to store with the table in the
text file.
+A column information comment is assumed to have the following format:
@example
# Column N: NAME [UNIT, TYPE, BLANK] COMMENT
@@ -9599,50 +7161,32 @@ information comment is assumed to have the following format:
@cindex NaN
@noindent
-Any sequence of characters between `@key{:}' and `@key{[}' will be
-interpreted as the column name (so it can contain anything except the
-`@key{[}' character). Anything between the `@key{]}' and the end of the
-line is defined as a comment. Within the brackets, anything before the
-first `@key{,}' is the units (physical units, for example km/s, or erg/s),
-anything before the second `@key{,}' is the short type identifier (see
-below, and @ref{Numeric data types}). Finally (still within the brackets),
-any non-white characters after the second `@key{,}' are interpreted as the
-blank value for that column (see @ref{Blank pixels}). Note that blank
-values will be stored in the same type as the column, not as a
-string@footnote{For floating point types, the @code{nan}, or @code{inf}
-strings (both not case-sensitive) refer to IEEE NaN (not a number) and
-infinity values respectively and will be stored as a floating point, so
-they are acceptable.}.
-
-When a formatting problem occurs (for example you have specified the wrong
-type code, see below), or the the column was already given meta-data in a
-previous comment, or the column number is larger than the actual number of
-columns in the table (the non-commented or empty lines), then the comment
-information line will be ignored.
-
-When a comment information line can be used, the leading and trailing white
-space characters will be stripped from all of the elements. For example in
-this line:
+Any sequence of characters between `@key{:}' and `@key{[}' will be interpreted
as the column name (so it can contain anything except the `@key{[}' character).
+Anything between the `@key{]}' and the end of the line is defined as a comment.
+Within the brackets, anything before the first `@key{,}' is the units
(physical units, for example km/s, or erg/s), anything before the second
`@key{,}' is the short type identifier (see below, and @ref{Numeric data
types}).
+Finally (still within the brackets), any non-white characters after the second
`@key{,}' are interpreted as the blank value for that column (see @ref{Blank
pixels}).
+Note that blank values will be stored in the same type as the column, not as a
string@footnote{For floating point types, the @code{nan}, or @code{inf} strings
(both not case-sensitive) refer to IEEE NaN (not a number) and infinity values
respectively and will be stored as a floating point, so they are acceptable.}.
+
+When a formatting problem occurs (for example you have specified the wrong
type code, see below), or the column was already given meta-data in a previous
comment, or the column number is larger than the actual number of columns in
the table (the non-commented or empty lines), then the comment information line
will be ignored.
+
+When a comment information line can be used, the leading and trailing white
space characters will be stripped from all of the elements.
+For example in this line:
@example
# Column 5: column name [km/s, f32,-99] Redshift as speed
@end example
-The @code{NAME} field will be `@code{column name}' and the @code{TYPE}
-field will be `@code{f32}'. Note how all the white space characters before
-and after strings are not used, but those in the middle remained. Also,
-white space characters aren't mandatory. Hence, in the example above, the
-@code{BLANK} field will be given the value of `@code{-99}'.
+The @code{NAME} field will be `@code{column name}' and the @code{TYPE} field
will be `@code{f32}'.
+Note how all the white space characters before and after strings are not used, but those in the middle remain.
+Also, white space characters aren't mandatory.
+Hence, in the example above, the @code{BLANK} field will be given the value of
`@code{-99}'.
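As a sketch of how this convention could be parsed, here is a hypothetical Python helper (not Gnuastro's actual implementation; it assumes all three bracketed fields are present and separated by commas):

```python
import re

def parse_column_info(line):
    """Parse a '# Column N: NAME [UNIT, TYPE, BLANK] COMMENT' comment line.
    Returns None for comment lines that don't follow this pattern."""
    m = re.match(r'#\s*Column\s+(\d+)\s*:([^[]*)\[([^\]]*)\](.*)', line)
    if m is None:
        return None
    number = int(m.group(1))
    name = m.group(2).strip()        # anything between ':' and '[' (stripped)
    unit, ctype, blank = (f.strip() for f in m.group(3).split(','))
    comment = m.group(4).strip()     # anything after ']' (stripped)
    return number, name, unit, ctype, blank, comment

info = parse_column_info(
    "# Column 5: column name [km/s, f32,-99] Redshift as speed")
assert info == (5, 'column name', 'km/s', 'f32', '-99', 'Redshift as speed')
```

As in the example discussed above, leading/trailing white space is stripped from every field, while internal white space (as in `column name`) survives.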
-Except for the column number (@code{N}), the rest of the fields are
-optional. Also, the column information comments don't have to be in
-order. In other words, the information for column @mymath{N+m}
-(@mymath{m>0}) can be given in a line before column @mymath{N}. Also, you
-don't have to specify information for all columns. Those columns that don't
-have this information will be interpreted with the default settings (like
-the case above: values are double precision floating point, and the column
-has no name, unit, or comment). So these lines are all acceptable for any
-table (the first one, with nothing but the column number is redundant):
+Except for the column number (@code{N}), the rest of the fields are optional.
+Also, the column information comments don't have to be in order.
+In other words, the information for column @mymath{N+m} (@mymath{m>0}) can be
given in a line before column @mymath{N}.
+Also, you don't have to specify information for all columns.
+Those columns that don't have this information will be interpreted with the
default settings (like the case above: values are double precision floating
point, and the column has no name, unit, or comment).
+So these lines are all acceptable for any table (the first one, with nothing
but the column number is redundant):
@example
# Column 5:
@@ -9651,39 +7195,27 @@ table (the first one, with nothing but the column number is redundant):
@end example
@noindent
-The data type of the column should be specified with one of the following
-values:
+The data type of the column should be specified with one of the following
values:
@itemize
@item
For a numeric column, you can use any of the numeric types (and their
recognized identifiers) described in @ref{Numeric data types}.
@item
-`@code{strN}': for strings. The @code{N} value identifies the length of the
-string (how many characters it has). The start of the string on each row is
-the first non-delimiter character of the column that has the string
-type. The next @code{N} characters will be interpreted as a string and all
-leading and trailing white space will be removed.
-
-If the next column's characters, are closer than @code{N} characters to the
-start of the string column in that line/row, they will be considered part
-of the string column. If there is a new-line character before the ending of
-the space given to the string column (in other words, the string column is
-the last column), then reading of the string will stop, even if the
-@code{N} characters are not complete yet. See @file{tests/table/table.txt}
-for one example. Therefore, the only time you have to pay attention to the
-positioning and spaces given to the string column is when it is not the
-last column in the table.
-
-The only limitation in this format is that trailing and leading white space
-characters will be removed from the columns that are read. In most cases,
-this is the desired behavior, but if trailing and leading white-spaces are
-critically important to your analysis, define your own starting and ending
-characters and remove them after the table has been read. For example in
-the sample table below, the two `@key{|}' characters (which are arbitrary)
-will remain in the value of the second column and you can remove them
-manually later. If only one of the leading or trailing white spaces is
-important for your work, you can only use one of the `@key{|}'s.
+`@code{strN}': for strings.
+The @code{N} value identifies the length of the string (how many characters it
has).
+The start of the string on each row is the first non-delimiter character of
the column that has the string type.
+The next @code{N} characters will be interpreted as a string and all leading
and trailing white space will be removed.
+
+If the next column's characters are closer than @code{N} characters to the start of the string column in that line/row, they will be considered part of the string column.
+If there is a new-line character before the ending of the space given to the
string column (in other words, the string column is the last column), then
reading of the string will stop, even if the @code{N} characters are not
complete yet.
+See @file{tests/table/table.txt} for one example.
+Therefore, the only time you have to pay attention to the positioning and
spaces given to the string column is when it is not the last column in the
table.
+
+The only limitation in this format is that trailing and leading white space
characters will be removed from the columns that are read.
+In most cases, this is the desired behavior, but if trailing and leading
white spaces are critically important to your analysis, define your own
starting and ending characters and remove them after the table has been read.
+For example in the sample table below, the two `@key{|}' characters (which are
arbitrary) will remain in the value of the second column and you can remove
them manually later.
+If only one of the leading or trailing white spaces is important for your
work, you can use only one of the `@key{|}'s.
@example
# Column 1: ID [label, u8]
@@ -9694,88 +7226,54 @@ important for your work, you can only use one of the
`@key{|}'s.
@end itemize
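The fixed-width reading of a `@code{strN}' column described above can be sketched as follows (an illustration of the stated rules, not Gnuastro's actual parser; the function name and signature are hypothetical):

```python
def read_strN(line, start, N):
    """Illustrative sketch: read a 'strN' column value from one table row.

    'start' is the position of the first non-delimiter character of the
    string column; at most N characters are read.
    """
    # A new-line before the N characters are complete ends the string
    # (i.e., when the string column is the last column in the row).
    raw = line[start:start + N].split("\n")[0]
    # Leading and trailing white space is removed from the value read.
    return raw.strip()
```

For example, with `N=12` starting at position 5 of the row `123  hello world   42`, the value read is `hello world` (the `42` lies beyond the 12 characters, so it is not swallowed into the string).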
-Note that the FITS binary table standard does not define the @code{unsigned
-int} and @code{unsigned long} types, so if you want to convert your tables
-to FITS binary tables, use other types. Also, note that in the FITS ASCII
-table, there is only one integer type (@code{long}). So if you convert a
-Gnuastro plain text table to a FITS ASCII table with the @ref{Table}
-program, the type information for integers will be lost. Conversely if
-integer types are important for you, you have to manually set them when
-reading a FITS ASCII table (for example with the Table program when
-reading/converting into a file, or with the @file{gnuastro/table.h} library
-functions when reading into memory).
+Note that the FITS binary table standard does not define the @code{unsigned
int} and @code{unsigned long} types, so if you want to convert your tables to
FITS binary tables, use other types.
+Also, note that in the FITS ASCII table, there is only one integer type
(@code{long}).
+So if you convert a Gnuastro plain text table to a FITS ASCII table with the
@ref{Table} program, the type information for integers will be lost.
+Conversely, if integer types are important for you, you have to manually set
them when reading a FITS ASCII table (for example with the Table program when
reading/converting into a file, or with the @file{gnuastro/table.h} library
functions when reading into memory).
@node Selecting table columns, , Gnuastro text table format, Tables
@subsection Selecting table columns
-At the lowest level, the only defining aspect of a column in a table is its
-number, or position. But selecting columns purely by number is not very
-convenient and, especially when the tables are large it can be very
-frustrating and prone to errors. Hence, table file formats (for example see
-@ref{Recognized table formats}) have ways to store additional information
-about the columns (meta-data). Some of the most common pieces of
-information about each column are its @emph{name}, the @emph{units} of data
-in the it, and a @emph{comment} for longer/informal description of the
-column's data.
+At the lowest level, the only defining aspect of a column in a table is its
number, or position.
+But selecting columns purely by number is not very convenient and, especially
when the tables are large, it can be very frustrating and prone to errors.
+Hence, table file formats (for example see @ref{Recognized table formats})
have ways to store additional information about the columns (meta-data).
+Some of the most common pieces of information about each column are its
@emph{name}, the @emph{units} of the data in it, and a @emph{comment} for a
longer/informal description of the column's data.
-To facilitate research with Gnuastro, you can select columns by matching,
-or searching in these three fields, besides the low-level column number. To
-view the full list of information on the columns in the table, you can use
-the Table program (see @ref{Table}) with the command below (replace
-@file{table-file} with the filename of your table, if its FITS, you might
-also need to specify the HDU/extension which contains the table):
+To facilitate research with Gnuastro, you can select columns by matching, or
searching in these three fields, besides the low-level column number.
+To view the full list of information on the columns in the table, you can use
the Table program (see @ref{Table}) with the command below (replace
@file{table-file} with the filename of your table; if it is FITS, you might
also need to specify the HDU/extension which contains the table):
@example
$ asttable --information table-file
@end example
-Gnuastro's programs need the columns for different purposes, for example in
-Crop, you specify the columns containing the central coordinates of the
-crop centers with the @option{--coordcol} option (see @ref{Crop
-options}). On the other hand, in MakeProfiles, to specify the column
-containing the profile position angles, you must use the @option{--pcol}
-option (see @ref{MakeProfiles catalog}). Thus, there can be no unified
-common option name to select columns for all programs (different columns
-have different purposes). However, when the program expects a column for a
-specific context, the option names end in the @option{col} suffix like the
-examples above. These options accept values in integer (column number), or
-string (metadata match/search) format.
-
-If the value can be parsed as a positive integer, it will be seen as the
-low-level column number. Note that column counting starts from 1, so if you
-ask for column 0, the respective program will abort with an error. When the
-value can't be interpreted as an a integer number, it will be seen as a
-string of characters which will be used to match/search in the table's
-meta-data. The meta-data field which the value will be compared with can be
-selected through the @option{--searchin} option, see @ref{Input output
-options}. @option{--searchin} can take three values: @code{name},
-@code{unit}, @code{comment}. The matching will be done following this
-convention:
+Gnuastro's programs need the columns for different purposes; for example, in
Crop, you specify the columns containing the central coordinates of the crop
centers with the @option{--coordcol} option (see @ref{Crop options}).
+On the other hand, in MakeProfiles, to specify the column containing the
profile position angles, you must use the @option{--pcol} option (see
@ref{MakeProfiles catalog}).
+Thus, there can be no unified common option name to select columns for all
programs (different columns have different purposes).
+However, when the program expects a column for a specific context, the option
names end in the @option{col} suffix, as in the examples above.
+These options accept values in integer (column number), or string (metadata
match/search) format.
+
+If the value can be parsed as a positive integer, it will be seen as the
low-level column number.
+Note that column counting starts from 1, so if you ask for column 0, the
respective program will abort with an error.
+When the value can't be interpreted as an integer number, it will be seen as
a string of characters which will be used to match/search in the table's
meta-data.
+The meta-data field which the value will be compared with can be selected
through the @option{--searchin} option, see @ref{Input output options}.
+@option{--searchin} can take three values: @code{name}, @code{unit},
@code{comment}.
+The matching will be done following this convention:
@itemize
@item
-If the value is enclosed in two slashes (for example @command{-x/RA_/}, or
-@option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to
-be a regular expression with the same convention as GNU AWK. GNU AWK has a
-very well written
-@url{https://www.gnu.org/software/gawk/manual/html_node/Regexp.html,
-chapter} describing regular expressions, so we we will not continue
-discussing them here. Regular expressions are a very powerful tool in
-matching text and useful in many contexts. We thus strongly encourage
-reviewing this chapter for greatly improving the quality of your work in
-many cases, not just for searching column meta-data in Gnuastro.
+If the value is enclosed in two slashes (for example @command{-x/RA_/}, or
@option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to be a
regular expression with the same convention as GNU AWK.
+GNU AWK has a very well written
@url{https://www.gnu.org/software/gawk/manual/html_node/Regexp.html, chapter}
describing regular expressions, so we will not continue discussing them here.
+Regular expressions are a very powerful tool in matching text and useful in
many contexts.
+We thus strongly encourage reviewing this chapter: it can greatly improve the
quality of your work in many cases, not just when searching column meta-data
in Gnuastro.
@item
-When the string isn't enclosed between `@key{/}'s, any column that exactly
-matches the given value in the given field will be selected.
+When the string isn't enclosed between `@key{/}'s, any column that exactly
matches the given value in the given field will be selected.
@end itemize
-Note that in both cases, you can ignore the case of alphabetic characters
-with the @option{--ignorecase} option, see @ref{Input output options}. Also, in
-both cases, multiple columns may be selected with one call to this
-function. In this case, the order of the selected columns (with one call)
-will be the same order as they appear in the table.
+Note that in both cases, you can ignore the case of alphabetic characters with
the @option{--ignorecase} option, see @ref{Input output options}.
+Also, in both cases, multiple columns may be selected with one call to this
function.
+In this case, the order of the selected columns (with one call) will be the
same order as they appear in the table.
@@ -9784,53 +7282,30 @@ will be the same order as they appear in the table.
@node Tessellation, Automatic output, Tables, Common program behavior
@section Tessellation
-It is sometimes necessary to classify the elements in a dataset (for
-example pixels in an image) into a grid of individual, non-overlapping
-tiles. For example when background sky gradients are present in an image,
-you can define a tile grid over the image. When the tile sizes are set
-properly, the background's variation over each tile will be negligible,
-allowing you to measure (and subtract) it. In other cases (for example
-spatial domain convolution in Gnuastro, see @ref{Convolve}), it might
-simply be for speed of processing: each tile can be processed independently
-on a separate CPU thread. In the arts and mathematics, this process is
-formally known as @url{https://en.wikipedia.org/wiki/Tessellation,
-tessellation}.
-
-The size of the regular tiles (in units of data-elements, or pixels in an
-image) can be defined with the @option{--tilesize} option. It takes
-multiple numbers (separated by a comma) which will be the length along the
-respective dimension (in FORTRAN/FITS dimension order). Divisions are also
-acceptable, but must result in an integer. For example
-@option{--tilesize=30,40} can be used for an image (a 2D dataset). The
-regular tile size along the first FITS axis (horizontal when viewed in SAO
-ds9) will be 30 pixels and along the second it will be 40 pixels. Ideally,
-@option{--tilesize} should be selected such that all tiles in the image
-have exactly the same size. In other words, that the dataset length in each
-dimension is divisible by the tile size in that dimension.
-
-However, this is not always possible: the dataset can be any size and every
-pixel in it is valuable. In such cases, Gnuastro will look at the
-significance of the remainder length, if it is not significant (for example
-one or two pixels), then it will just increase the size of the first tile
-in the respective dimension and allow the rest of the tiles to have the
-required size. When the remainder is significant (for example one pixel
-less than the size along that dimension), the remainder will be added to
-one regular tile's size and the large tile will be cut in half and put in
-the two ends of the grid/tessellation. In this way, all the tiles in the
-central regions of the dataset will have the regular tile sizes and the
-tiles on the edge will be slightly larger/smaller depending on the
-remainder significance. The fraction which defines the remainder
-significance along all dimensions can be set through
-@option{--remainderfrac}.
-
-The best tile size is directly related to the spatial properties of the
-property you want to study (for example, gradient on the image). In
-practice we assume that the gradient is not present over each tile. So if
-there is a strong gradient (for example in long wavelength ground based
-images) or the image is of a crowded area where there isn't too much blank
-area, you have to choose a smaller tile size. A larger mesh will give more
-pixels and and so the scatter in the results will be less (better
-statistics).
+It is sometimes necessary to classify the elements in a dataset (for example
pixels in an image) into a grid of individual, non-overlapping tiles.
+For example when background sky gradients are present in an image, you can
define a tile grid over the image.
+When the tile sizes are set properly, the background's variation over each
tile will be negligible, allowing you to measure (and subtract) it.
+In other cases (for example spatial domain convolution in Gnuastro, see
@ref{Convolve}), it might simply be for speed of processing: each tile can be
processed independently on a separate CPU thread.
+In the arts and mathematics, this process is formally known as
@url{https://en.wikipedia.org/wiki/Tessellation, tessellation}.
+
+The size of the regular tiles (in units of data-elements, or pixels in an
image) can be defined with the @option{--tilesize} option.
+It takes multiple numbers (separated by a comma) which will be the length
along the respective dimension (in FORTRAN/FITS dimension order).
+Divisions are also acceptable, but must result in an integer.
+For example @option{--tilesize=30,40} can be used for an image (a 2D dataset).
+The regular tile size along the first FITS axis (horizontal when viewed in SAO
ds9) will be 30 pixels and along the second it will be 40 pixels.
+Ideally, @option{--tilesize} should be selected such that all tiles in the
image have exactly the same size.
+In other words, the dataset length in each dimension should be divisible by
the tile size in that dimension.
+
+However, this is not always possible: the dataset can be any size and every
pixel in it is valuable.
+In such cases, Gnuastro will look at the significance of the remainder
length: if it is not significant (for example one or two pixels), it will
just increase the size of the first tile in the respective dimension and
allow the rest of the tiles to have the required size.
+When the remainder is significant (for example one pixel less than the size
along that dimension), the remainder will be added to one regular tile's size
and the large tile will be cut in half and placed at the two ends of the
grid/tessellation.
+In this way, all the tiles in the central regions of the dataset will have the
regular tile sizes and the tiles on the edge will be slightly larger/smaller
depending on the remainder significance.
+The fraction which defines the remainder significance along all dimensions can
be set through @option{--remainderfrac}.
+
+The best tile size is directly related to the spatial properties of the
quantity you want to study (for example, a gradient over the image).
+In practice, we assume that the gradient is not present over each tile.
+So if there is a strong gradient (for example in long wavelength ground based
images) or the image is of a crowded area where there isn't too much blank
area, you have to choose a smaller tile size.
+A larger mesh will give more pixels and so the scatter in the results will be
less (better statistics).
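The remainder handling described above can be sketched along one dimension as follows (a minimal sketch following the text, not Gnuastro's exact code; the function name is hypothetical):

```python
def tile_lengths(length, tilesize, remainderfrac=0.1):
    """Illustrative sketch: tile lengths along one dimension of the grid.

    'length' is the dataset length, 'tilesize' the regular tile size and
    'remainderfrac' the fraction defining remainder significance
    (the '--remainderfrac' option).
    """
    if length < tilesize:
        return [length]
    ntiles = length // tilesize
    rem = length % tilesize
    if rem == 0:                       # perfect division: all tiles regular
        return [tilesize] * ntiles
    if rem / tilesize < remainderfrac:
        # Insignificant remainder: just enlarge the first tile.
        return [tilesize + rem] + [tilesize] * (ntiles - 1)
    # Significant remainder: add it to one regular tile's size, cut that
    # large tile in half and place the halves at the two ends of the grid.
    large = tilesize + rem
    first = large // 2
    last = large - first
    return [first] + [tilesize] * (ntiles - 1) + [last]
```

For example, a 100-pixel axis with `--tilesize=30` gives tiles of 20, 30, 30 and 20 pixels: the central tiles keep the regular size and the edge tiles absorb the remainder.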
@cindex CCD
@cindex Amplifier
@@ -9838,47 +7313,31 @@ statistics).
@cindex Subaru Telescope
@cindex Hyper Suprime-Cam
@cindex Hubble Space Telescope (HST)
-For raw image processing, a single tessellation/grid is not sufficient. Raw
-images are the unprocessed outputs of the camera detectors. Modern
-detectors usually have multiple readout channels each with its own
-amplifier. For example the Hubble Space Telescope Advanced Camera for
-Surveys (ACS) has four amplifiers over its full detector area dividing the
-square field of view to four smaller squares. Ground based image detectors
-are not exempt, for example each CCD of Subaru Telescope's Hyper
-Suprime-Cam camera (which has 104 CCDs) has four amplifiers, but they have
-the same height of the CCD and divide the width by four parts.
+For raw image processing, a single tessellation/grid is not sufficient.
+Raw images are the unprocessed outputs of the camera detectors.
+Modern detectors usually have multiple readout channels each with its own
amplifier.
+For example, the Hubble Space Telescope Advanced Camera for Surveys (ACS) has
four amplifiers over its full detector area, dividing the square field of view
into four smaller squares.
+Ground based image detectors are not exempt; for example, each CCD of Subaru
Telescope's Hyper Suprime-Cam camera (which has 104 CCDs) has four amplifiers,
but they have the same height as the CCD and divide its width into four parts.
@cindex Channel
-The bias current on each amplifier is different, and initial bias
-subtraction is not perfect. So even after subtracting the measured bias
-current, you can usually still identify the boundaries of different
-amplifiers by eye. See Figure 11(a) in Akhlaghi and Ichikawa (2015) for an
-example. This results in the final reduced data to have non-uniform
-amplifier-shaped regions with higher or lower background flux values. Such
-systematic biases will then propagate to all subsequent measurements we do
-on the data (for example photometry and subsequent stellar mass and star
-formation rate measurements in the case of galaxies).
-
-Therefore an accurate analysis requires a two layer tessellation: the top
-layer contains larger tiles, each covering one amplifier channel. For
-clarity we'll call these larger tiles ``channels''. The number of channels
-along each dimension is defined through the @option{--numchannels}. Each
-channel is then covered by its own individual smaller tessellation (with
-tile sizes determined by the @option{--tilesize} option). This will allow
-independent analysis of two adjacent pixels from different channels if
-necessary. If the image is processed or the detector only has one
-amplifier, you can set the number of channels in both dimension to 1.
-
-The final tessellation can be inspected on the image with the
-@option{--checktiles} option that is available to all programs which use
-tessellation for localized operations. When this option is called, a FITS
-file with a @file{_tiled.fits} suffix will be created along with the
-outputs, see @ref{Automatic output}. Each pixel in this image has the
-number of the tile that covers it. If the number of channels in any
-dimension are larger than unity, you will notice that the tile IDs are
-defined such that the first channels is covered first, then the second and
-so on. For the full list of processing-related common options (including
-tessellation options), please see @ref{Processing options}.
+The bias current on each amplifier is different, and initial bias subtraction
is not perfect.
+So even after subtracting the measured bias current, you can usually still
identify the boundaries of different amplifiers by eye.
+See Figure 11(a) in Akhlaghi and Ichikawa (2015) for an example.
+This results in the final reduced data having non-uniform amplifier-shaped
regions with higher or lower background flux values.
+Such systematic biases will then propagate to all subsequent measurements we
do on the data (for example photometry and subsequent stellar mass and star
formation rate measurements in the case of galaxies).
+
+Therefore an accurate analysis requires a two-layer tessellation: the top
layer contains larger tiles, each covering one amplifier channel.
+For clarity we'll call these larger tiles ``channels''.
+The number of channels along each dimension is defined through the
@option{--numchannels} option.
+Each channel is then covered by its own individual smaller tessellation (with
tile sizes determined by the @option{--tilesize} option).
+This will allow independent analysis of two adjacent pixels from different
channels if necessary.
+If the image is processed or the detector only has one amplifier, you can set
the number of channels in both dimensions to 1.
+
+The final tessellation can be inspected on the image with the
@option{--checktiles} option that is available to all programs which use
tessellation for localized operations.
+When this option is called, a FITS file with a @file{_tiled.fits} suffix will
be created along with the outputs, see @ref{Automatic output}.
+Each pixel in this image has the number of the tile that covers it.
+If the number of channels in any dimension is larger than unity, you will
notice that the tile IDs are defined such that the first channel is covered
first, then the second and so on.
+For the full list of processing-related common options (including tessellation
options), please see @ref{Processing options}.
@@ -9891,38 +7350,21 @@ tessellation options), please see @ref{Processing
options}.
@cindex Automatic output file names
@cindex Output file names, automatic
@cindex Setting output file names automatically
-All the programs in Gnuastro are designed such that specifying an output
-file or directory (based on the program context) is optional. When no
-output name is explicitly given (with @option{--output}, see @ref{Input
-output options}), the programs will automatically set an output name based
-on the input name(s) and what the program does. For example when you are
-using ConvertType to save FITS image named @file{dataset.fits} to a JPEG
-image and don't specify a name for it, the JPEG output file will be name
-@file{dataset.jpg}. When the input is from the standard input (for example
-a pipe, see @ref{Standard input}), and @option{--output} isn't given, the
-output name will be the program's name (for example
-@file{converttype.jpg}).
+All the programs in Gnuastro are designed such that specifying an output file
or directory (based on the program context) is optional.
+When no output name is explicitly given (with @option{--output}, see
@ref{Input output options}), the programs will automatically set an output name
based on the input name(s) and what the program does.
+For example, when you are using ConvertType to save a FITS image named
@file{dataset.fits} as a JPEG image and don't specify a name for it, the JPEG
output file will be named @file{dataset.jpg}.
+When the input is from the standard input (for example a pipe, see
@ref{Standard input}), and @option{--output} isn't given, the output name will
be the program's name (for example @file{converttype.jpg}).
@vindex --keepinputdir
-Another very important part of the automatic output generation is that all
-the directory information of the input file name is stripped off of
-it. This feature can be disabled with the @option{--keepinputdir} option,
-see @ref{Input output options}. It is the default because astronomical data
-are usually very large and organized specially with special file names. In
-some cases, the user might not have write permissions in those
-directories@footnote{In fact, even if the data is stored on your own
-computer, it is advised to only grant write permissions to the super user
-or root. This way, you won't accidentally delete or modify your valuable
-data!}.
-
-Let's assume that we are working on a report and want to process the
-FITS images from two projects (ABC and DEF), which are stored in the
-sub-directories named @file{ABCproject/} and @file{DEFproject/} of our
-top data directory (@file{/mnt/data}). The following shell commands
-show how one image from the former is first converted to a JPEG image
-through ConvertType and then the objects from an image in the latter
-project are detected using NoiseChisel. The text after the @command{#}
-sign are comments (not typed!).
+Another very important part of the automatic output generation is that all
the directory information of the input file name is stripped from it.
+This feature can be disabled with the @option{--keepinputdir} option, see
@ref{Input output options}.
+It is the default because astronomical data are usually very large and
specially organized, with special file names.
+In some cases, the user might not have write permissions in those
directories@footnote{In fact, even if the data is stored on your own computer,
it is advised to only grant write permissions to the super user or root.
+This way, you won't accidentally delete or modify your valuable data!}.
+
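The naming rule above can be sketched as follows (an illustration of the described behavior, not Gnuastro's actual function; the function name and parameters are hypothetical):

```python
import os

def auto_output(input_name, suffix, program, keepinputdir=False):
    """Illustrative sketch: build an automatic output name.

    'input_name' is None when input came from the standard input, in which
    case the program's name is used as the base.
    """
    if input_name is None:
        base = program                     # e.g., 'converttype'
    else:
        base = input_name
        if not keepinputdir:               # default: drop directory info
            base = os.path.basename(base)
        base = os.path.splitext(base)[0]   # drop the old suffix
    return base + suffix
```

For example, converting `/mnt/data/ABCproject/ABC01.fits` to JPEG without `--output` would produce `ABC01.jpg` in the current directory, while `--keepinputdir` would keep the full path.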
+Let's assume that we are working on a report and want to process the FITS
images from two projects (ABC and DEF), which are stored in the sub-directories
named @file{ABCproject/} and @file{DEFproject/} of our top data directory
(@file{/mnt/data}).
+The following shell commands show how one image from the former is first
converted to a JPEG image through ConvertType and then the objects from an
image in the latter project are detected using NoiseChisel.
+The text after the @command{#} sign is a comment (not typed!).
@example
$ pwd # Current location
@@ -9951,119 +7393,81 @@ ABC01.jpg ABC02.jpg DEF01_detected.fits
@cindex FITS
@cindex Output FITS headers
@cindex CFITSIO version on outputs
-The output of many of Gnuastro's programs are (or can be) FITS files. The
-FITS format has many useful features for storing scientific datasets
-(cubes, images and tables) along with a robust features for
-archivability. For more on this standard, please see @ref{Fits}.
-
-As a community convention described in @ref{Fits}, the first extension of
-all FITS files produced by Gnuastro's programs only contains the meta-data
-that is intended for the file's extension(s). For a Gnuastro program, this
-generic meta-data (that is stored as FITS keyword records) is its
-configuration when it produced this dataset: file name(s) of input(s) and
-option names, values and comments. Note that when the configuration is too
-trivial (only input filename, for example the program @ref{Table}) no
-meta-data is written in this extension.
-
-FITS keywords have the following limitations in regards to generic option
-names and values which are described below:
+The output of many of Gnuastro's programs are (or can be) FITS files.
+The FITS format has many useful features for storing scientific datasets
(cubes, images and tables) along with robust features for archivability.
+For more on this standard, please see @ref{Fits}.
+
+As a community convention described in @ref{Fits}, the first extension of all
FITS files produced by Gnuastro's programs only contains the meta-data that is
intended for the file's extension(s).
+For a Gnuastro program, this generic meta-data (that is stored as FITS keyword
records) is its configuration when it produced this dataset: file name(s) of
input(s) and option names, values and comments.
+Note that when the configuration is too trivial (only an input filename, for
example in the program @ref{Table}), no meta-data is written in this extension.
+
+FITS keywords have the following limitations in regard to generic option
names and values:
@itemize
@item
-If a keyword (option name) is longer than 8 characters, the first word in
-the record (80 character line) is @code{HIERARCH} which is followed by the
-keyword name.
+If a keyword (option name) is longer than 8 characters, the first word in the
record (80 character line) is @code{HIERARCH} which is followed by the keyword
name.
@item
-Values can be at most 75 characters, but for strings, this changes to 73
-(because of the two extra @key{'} characters that are necessary). However,
-if the value is a file name, containing slash (@key{/}) characters to
-separate directories, Gnuastro will break the value into multiple keywords.
+Values can be at most 75 characters, but for strings, this changes to 73
(because of the two extra @key{'} characters that are necessary).
+However, if the value is a file name, containing slash (@key{/}) characters to
separate directories, Gnuastro will break the value into multiple keywords.
@item
Keyword names ignore case, therefore they are all in capital letters.
-Therefore, if you want to use Grep to inspect these keywords, use the
-@option{-i} option, like the example below.
+So, if you want to use Grep to inspect these keywords, use the @option{-i}
option, as in the example below.
@example
$ astfits image_detected.fits -h0 | grep -i snquant
@end example
@end itemize
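The rules listed above can be sketched as follows (an illustration of the stated conventions, not CFITSIO's actual record writer; the function name is hypothetical, while @code{SNQUANT} and @code{DETGROWQUANT} are real NoiseChisel option names):

```python
def keyword_record(name, value):
    """Illustrative sketch: format one 80-character FITS keyword record."""
    name = name.upper()                    # keyword names ignore case
    if len(name) > 8:                      # long names need HIERARCH
        head = "HIERARCH " + name + " = "
    else:
        head = name.ljust(8) + "= "
    if isinstance(value, str):
        # Strings are quoted, so of the 75 value characters only 73 remain.
        body = "'" + value[:73] + "'"
    else:
        body = str(value)
    return (head + body).ljust(80)[:80]    # records are 80-character lines
```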
-The keywords above are classified (separated by an empty line and title) as
-a group titled ``ProgramName configuration''. This meta-data extension, as
-well as all the other extensions (which contain data), also contain have
-final group of keywords to keep the basic date and version information of
-Gnuastro, its dependencies and the pipeline that is using Gnuastro (if its
-under version control).
+The keywords above are classified (separated by an empty line and title) as a
group titled ``ProgramName configuration''.
+This meta-data extension, as well as all the other extensions (which contain
data), also contains a final group of keywords to keep the basic date and
version information of Gnuastro, its dependencies and the pipeline that is
using Gnuastro (if it is under version control).
@table @command
@item DATE
-The creation time of the FITS file. This date is written directly by
-CFITSIO and is in UT format.
+The creation time of the FITS file.
+This date is written directly by CFITSIO and is in UT format.
@item COMMIT
-Git's commit description from the running directory of Gnuastro's
-programs. If the running directory is not version controlled or
-@file{libgit2} isn't installed (see @ref{Optional dependencies}) then this
-keyword will not be present. The printed value is equivalent to the output
-of the following command:
+Git's commit description from the running directory of Gnuastro's programs.
+If the running directory is not version controlled or @file{libgit2} isn't
installed (see @ref{Optional dependencies}) then this keyword will not be
present.
+The printed value is equivalent to the output of the following command:
@example
git describe --dirty --always
@end example
-If the running directory contains non-committed work, then the stored value
-will have a `@command{-dirty}' suffix. This can be very helpful to let you
-know that the data is not ready to be shared with collaborators or
-submitted to a journal. You should only share results that are produced
-after all your work is committed (safely stored in the version controlled
-history and thus reproducible).
-
-At first sight, version control appears to be mainly a tool for software
-developers. However progress in a scientific research is almost identical
-to progress in software development: first you have a rough idea that
-starts with handful of easy steps. But as the first results appear to be
-promising, you will have to extend, or generalize, it to make it more
-robust and work in all the situations your research covers, not just your
-first test samples. Slowly you will find wrong assumptions or bad
-implementations that need to be fixed (`bugs' in software development
-parlance). Finally, when you submit the research to your collaborators or a
-journal, many comments and suggestions will come in, and you have to
-address them.
-
-Software developers have created version control systems precisely for this
-kind of activity. Each significant moment in the project's history is
-called a ``commit'', see @ref{Version controlled source}. A snapshot of the
-project in each ``commit'' is safely stored away, so you can revert back to
-it at a later time, or check changes/progress. This way, you can be sure
-that your work is reproducible and track the progress and history. With
-version control, experimentation in the project's analysis is greatly
-facilitated, since you can easily revert back if a brainstorm test
-procedure fails.
-
-One important feature of version control is that the research result (FITS
-image, table, report or paper) can be stamped with the unique commit
-information that produced it. This information will enable you to exactly
-reproduce that same result later, even if you have made
-changes/progress. For one example of a research paper's reproduction
-pipeline, please see the
-@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline}
-of the @url{https://arxiv.org/abs/1505.01664, paper} describing
-@ref{NoiseChisel}.
+If the running directory contains non-committed work, then the stored value
will have a `@command{-dirty}' suffix.
+This can be very helpful to let you know that the data is not ready to be
shared with collaborators or submitted to a journal.
+You should only share results that are produced after all your work is
committed (safely stored in the version controlled history and thus
reproducible).
+
+At first sight, version control appears to be mainly a tool for software
developers.
+However, progress in scientific research is almost identical to progress in
software development: first you have a rough idea that starts with a handful of
easy steps.
+But as the first results appear to be promising, you will have to extend, or
generalize, it to make it more robust and work in all the situations your
research covers, not just your first test samples.
+Slowly you will find wrong assumptions or bad implementations that need to be
fixed (`bugs' in software development parlance).
+Finally, when you submit the research to your collaborators or a journal, many
comments and suggestions will come in, and you have to address them.
+
+Software developers have created version control systems precisely for this
kind of activity.
+Each significant moment in the project's history is called a ``commit'', see
@ref{Version controlled source}.
+A snapshot of the project in each ``commit'' is safely stored away, so you can
revert back to it at a later time, or check changes/progress.
+This way, you can be sure that your work is reproducible and track the
progress and history.
+With version control, experimentation in the project's analysis is greatly
facilitated, since you can easily revert back if a brainstorm test procedure
fails.
+
+One important feature of version control is that the research result (FITS
image, table, report or paper) can be stamped with the unique commit
information that produced it.
+This information will enable you to exactly reproduce that same result later,
even if you have made changes/progress.
+For one example of a research paper's reproduction pipeline, please see the
@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline} of
the @url{https://arxiv.org/abs/1505.01664, paper} describing @ref{NoiseChisel}.
@item CFITSIO
The version of CFITSIO used (see @ref{CFITSIO}).
@item WCSLIB
-The version of WCSLIB used (see @ref{WCSLIB}). Note that older versions of
-WCSLIB do not report the version internally. So this is only available if
-you are using more recent WCSLIB versions.
+The version of WCSLIB used (see @ref{WCSLIB}).
+Note that older versions of WCSLIB do not report the version internally.
+So this is only available if you are using more recent WCSLIB versions.
@item GSL
-The version of GNU Scientific Library that was used, see @ref{GNU
-Scientific Library}.
+The version of the GNU Scientific Library that was used, see @ref{GNU
Scientific Library}.
@item GNUASTRO
The version of Gnuastro used (see @ref{Version numbering}).
@@ -10101,43 +7505,27 @@ END
@cindex File operations
@cindex Operations on files
@cindex General file operations
-The most low-level and basic property of a dataset is how it is stored. To
-process, archive and transmit the data, you need a container to store it
-first. From the start of the computer age, different formats have been
-defined to store data, optimized for particular applications. One
-format/container can never be useful for all applications: the storage
-defines the application and vice-versa. In astronomy, the Flexible Image
-Transport System (FITS) standard has become the most common format of data
-storage and transmission. It has many useful features, for example multiple
-sub-containers (also known as extensions or header data units, HDUs) within
-one file, or support for tables as well as images. Each HDU can store an
-independent dataset and its corresponding meta-data. Therefore, Gnuastro
-has one program (see @ref{Fits}) specifically designed to manipulate FITS
-HDUs and the meta-data (header keywords) in each HDU.
-
-Your astronomical research does not just involve data analysis (where the
-FITS format is very useful). For example you want to demonstrate your raw
-and processed FITS images or spectra as figures within slides, reports, or
-papers. The FITS format is not defined for such applications. Thus,
-Gnuastro also comes with the ConvertType program (see @ref{ConvertType})
-which can be used to convert a FITS image to and from (where possible)
-other formats like plain text and JPEG (which allow two way conversion),
-along with EPS and PDF (which can only be created from FITS, not the other
-way round).
-
-Finally, the FITS format is not just for images, it can also store
-tables. Binary tables in particular can be very efficient in storing
-catalogs that have more than a few tens of columns and rows. However,
-unlike images (where all elements/pixels have one data type), tables
-contain multiple columns and each column can have different properties:
-independent data types (see @ref{Numeric data types}) and meta-data. In
-practice, each column can be viewed as a separate container that is grouped
-with others in the table. The only shared property of the columns in a table
-is thus the number of elements they contain. To allow easy
-inspection/manipulation of table columns, Gnuastro has the Table program
-(see @ref{Table}). It can be used to select certain table columns in a FITS
-table and see them as a human readable output on the command-line, or to
-save them into another plain text or FITS table.
+The most low-level and basic property of a dataset is how it is stored.
+To process, archive and transmit the data, you need a container to store it
first.
+From the start of the computer age, different formats have been defined to
store data, optimized for particular applications.
+One format/container can never be useful for all applications: the storage
defines the application and vice-versa.
+In astronomy, the Flexible Image Transport System (FITS) standard has become
the most common format of data storage and transmission.
+It has many useful features, for example multiple sub-containers (also known
as extensions or header data units, HDUs) within one file, or support for
tables as well as images.
+Each HDU can store an independent dataset and its corresponding meta-data.
+Therefore, Gnuastro has one program (see @ref{Fits}) specifically designed to
manipulate FITS HDUs and the meta-data (header keywords) in each HDU.
+
+Your astronomical research does not just involve data analysis (where the FITS
format is very useful).
+For example, you may want to present your raw and processed FITS images or
spectra as figures within slides, reports, or papers.
+The FITS format is not defined for such applications.
+Thus, Gnuastro also comes with the ConvertType program (see @ref{ConvertType})
which can be used to convert a FITS image to and from (where possible) other
formats like plain text and JPEG (which allow two way conversion), along with
EPS and PDF (which can only be created from FITS, not the other way round).
+
+Finally, the FITS format is not just for images, it can also store tables.
+Binary tables in particular can be very efficient in storing catalogs that
have more than a few tens of columns and rows.
+However, unlike images (where all elements/pixels have one data type), tables
contain multiple columns and each column can have different properties:
independent data types (see @ref{Numeric data types}) and meta-data.
+In practice, each column can be viewed as a separate container that is grouped
with others in the table.
+The only shared property of the columns in a table is thus the number of
elements they contain.
+To allow easy inspection/manipulation of table columns, Gnuastro has the Table
program (see @ref{Table}).
+It can be used to select certain table columns in a FITS table and see them as
a human readable output on the command-line, or to save them into another plain
text or FITS table.
@menu
* Fits:: View and manipulate extensions and keywords.
@@ -10154,70 +7542,38 @@ save them into another plain text or FITS table.
@section Fits
@cindex Vatican library
-The ``Flexible Image Transport System'', or FITS, is by far the most common
-data container format in astronomy and in constant use since the
-1970s. Archiving (future usage, simplicity) has been one of the primary
-design principles of this format. In the last few decades it has proved so
-useful and robust that the Vatican Library has also chosen FITS for its
-``long-term digital preservation''
-project@footnote{@url{https://www.vaticanlibrary.va/home.php?pag=progettodigit}}.
+The ``Flexible Image Transport System'', or FITS, is by far the most common
data container format in astronomy and in constant use since the 1970s.
+Archiving (future usage, simplicity) has been one of the primary design
principles of this format.
+In the last few decades it has proved so useful and robust that the Vatican
Library has also chosen FITS for its ``long-term digital preservation''
project@footnote{@url{https://www.vaticanlibrary.va/home.php?pag=progettodigit}}.
@cindex IAU, international astronomical union
-Although the full name of the standard invokes the idea that it is only for
-images, it also contains complete and robust features for tables. It
-started off in the 1970s and was formally published as a standard in 1981,
-it was adopted by the International Astronomical Union (IAU) in 1982 and an
-IAU working group to maintain its future was defined in 1988. The FITS 2.0
-and 3.0 standards were approved in 2000 and 2008 respectively, and the 4.0
-draft has also been released recently, please see the
-@url{https://fits.gsfc.nasa.gov/fits_standard.html, FITS standard document
-webpage} for the full text of all versions. Also see the
-@url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0 standard paper}
-for a nice introduction and history along with the full standard.
+Although the full name of the standard invokes the idea that it is only for
images, it also contains complete and robust features for tables.
+It started off in the 1970s and was formally published as a standard in 1981;
it was adopted by the International Astronomical Union (IAU) in 1982, and an IAU
working group to maintain its future was defined in 1988.
+The FITS 2.0 and 3.0 standards were approved in 2000 and 2008 respectively,
and the 4.0 draft has also been released recently; please see the
@url{https://fits.gsfc.nasa.gov/fits_standard.html, FITS standard document
webpage} for the full text of all versions.
+Also see the @url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0
standard paper} for a nice introduction and history along with the full
standard.
@cindex Meta-data
-Many common image formats, for example a JPEG, only have one image/dataset
-per file, however one great advantage of the FITS standard is that it
-allows you to keep multiple datasets (images or tables along with their
-separate meta-data) in one file. In the FITS standard, each data + metadata
-is known as an extension, or more formally a header data unit or HDU. The
-HDUs in a file can be completely independent: you can have multiple images
-of different dimensions/sizes or tables as separate extensions in one
-file. However, while the standard doesn't impose any constraints on the
-relation between the datasets, it is strongly encouraged to group data that
-are contextually related with each other in one file. For example an image
-and the table/catalog of objects and their measured properties in that
-image. Other examples can be images of one patch of sky in different colors
-(filters), or one raw telescope image along with its calibration data
-(tables or images).
-
-As discussed above, the extensions in a FITS file can be completely
-independent. To keep some information (meta-data) about the group of
-extensions in the FITS file, the community has adopted the following
-convention: put no data in the first extension, so it is just
-meta-data. This extension can thus be used to store Meta-data regarding the
-whole file (grouping of extensions). Subsequent extensions may contain data
-along with their own separate meta-data. All of Gnuastro's programs also
-follow this convention: the main output dataset(s) are placed in the second
-(or later) extension(s). The first extension contains no data the program's
-configuration (input file name, along with all its option values) are
-stored as its meta-data, see @ref{Output FITS files}.
-
-The meta-data contain information about the data, for example which region
-of the sky an image corresponds to, the units of the data, what telescope,
-camera, and filter the data were taken with, it observation date, or the
-software that produced it and its configuration. Without the meta-data, the
-raw dataset is practically just a collection of numbers and really hard to
-understand, or connect with the real world (other datasets). It is thus
-strongly encouraged to supplement your data (at any level of processing)
-with as much meta-data about your processing/science as possible.
-
-The meta-data of a FITS file is in ASCII format, which can be easily viewed
-or edited with a text editor or on the command-line. Each meta-data element
-(known as a keyword generally) is composed of a name, value, units and
-comments (the last two are optional). For example below you can see three
-FITS meta-data keywords for specifying the world coordinate system (WCS, or
-its location in the sky) of a dataset:
+Many common image formats, for example JPEG, only have one image/dataset per
file; however, one great advantage of the FITS standard is that it allows you to
keep multiple datasets (images or tables along with their separate meta-data)
in one file.
+In the FITS standard, each data + metadata is known as an extension, or more
formally a header data unit or HDU.
+The HDUs in a file can be completely independent: you can have multiple images
of different dimensions/sizes or tables as separate extensions in one file.
+However, while the standard doesn't impose any constraints on the relation
between the datasets, it is strongly encouraged to group data that are
contextually related with each other in one file.
+For example an image and the table/catalog of objects and their measured
properties in that image.
+Other examples can be images of one patch of sky in different colors
(filters), or one raw telescope image along with its calibration data (tables
or images).
+
+As discussed above, the extensions in a FITS file can be completely
independent.
+To keep some information (meta-data) about the group of extensions in the FITS
file, the community has adopted the following convention: put no data in the
first extension, so it is just meta-data.
+This extension can thus be used to store meta-data regarding the whole file
(grouping of extensions).
+Subsequent extensions may contain data along with their own separate meta-data.
+All of Gnuastro's programs also follow this convention: the main output
dataset(s) are placed in the second (or later) extension(s).
+The first extension contains no data; the program's configuration (input file
name, along with all its option values) is stored as its meta-data, see
@ref{Output FITS files}.
+
+The meta-data contain information about the data, for example which region of
the sky an image corresponds to, the units of the data, what telescope, camera,
and filter the data were taken with, its observation date, or the software that
produced it and its configuration.
+Without the meta-data, the raw dataset is practically just a collection of
numbers and really hard to understand, or connect with the real world (other
datasets).
+It is thus strongly encouraged to supplement your data (at any level of
processing) with as much meta-data about your processing/science as possible.
+
+The meta-data of a FITS file is in ASCII format, which can be easily viewed or
edited with a text editor or on the command-line.
+Each meta-data element (known as a keyword generally) is composed of a name,
value, units and comments (the last two are optional).
+For example below you can see three FITS meta-data keywords for specifying the
world coordinate system (WCS, or its location in the sky) of a dataset:
@example
LATPOLE = -27.805089 / [deg] Native latitude of celestial pole
@@ -10225,21 +7581,13 @@ RADESYS = 'FK5' / Equatorial coordinate
system
EQUINOX = 2000.0 / [yr] Equinox of equatorial coordinates
@end example
-However, there are some limitations which discourage viewing/editing the
-keywords with text editors. For example there is a fixed length of 80
-characters for each keyword (its name, value, units and comments) and there
-are no new-line characters, so on a text editor all the keywords are seen
-in one line. Also, the meta-data keywords are immediately followed by the
-data which are commonly in binary format and will show up as strange
-looking characters on a text editor, and significantly slowing down the
-processor.
+However, there are some limitations which discourage viewing/editing the
keywords with text editors.
+For example there is a fixed length of 80 characters for each keyword (its
name, value, units and comments) and there are no new-line characters, so on a
text editor all the keywords are seen in one line.
+Also, the meta-data keywords are immediately followed by the data, which are
commonly in binary format and will show up as strange-looking characters in a
text editor, significantly slowing down the processor.
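As a concrete illustration of this fixed-length record structure, the following sketch (plain shell, no FITS file or special software needed; the keyword values are taken from the example above) builds one keyword record in a common fixed-format layout:

```shell
# Sketch of one FITS header record ("card"): keyword name left-justified
# in columns 1-8, '= ' in columns 9-10, the value right-justified up to
# column 30, then ' / ' and the comment, padded to a fixed 80 characters.
card=$(printf '%-8s= %20s / %-47s' \
       'EQUINOX' '2000.0' '[yr] Equinox of equatorial coordinates')
echo "${#card}"    # 80: every keyword record has this fixed length
```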
-Gnuastro's Fits program was designed to allow easy manipulation of FITS
-extensions and meta-data keywords on the command-line while conforming
-fully with the FITS standard. For example you can copy or cut (copy and
-remove) HDUs/extensions from one FITS file to another, or completely delete
-them. It also has features to delete, add, or edit meta-data keywords
-within one HDU.
+Gnuastro's Fits program was designed to allow easy manipulation of FITS
extensions and meta-data keywords on the command-line while conforming fully
with the FITS standard.
+For example you can copy or cut (copy and remove) HDUs/extensions from one
FITS file to another, or completely delete them.
+It also has features to delete, add, or edit meta-data keywords within one HDU.
@menu
* Invoking astfits:: Arguments and options to Header.
@@ -10248,9 +7596,8 @@ within one HDU.
@node Invoking astfits, , Fits, Fits
@subsection Invoking Fits
-Fits can print or manipulate the FITS file HDUs (extensions), meta-data
-keywords in a given HDU. The executable name is @file{astfits} with the
-following general template
+Fits can print or manipulate the FITS file HDUs (extensions) or the meta-data
keywords in a given HDU.
+The executable name is @file{astfits} with the following general template:
@example
$ astfits [OPTION...] ASTRdata
@@ -10287,29 +7634,15 @@ $ astfits --write=MYKEY1,20.00,"An example keyword"
--write=MYKEY2,fd
@end example
@cindex HDU
-When no action is requested (and only a file name is given), Fits will
-print a list of information about the extension(s) in the file. This
-information includes the HDU number, HDU name (@code{EXTNAME} keyword),
-type of data (see @ref{Numeric data types}, and the number of data elements
-it contains (size along each dimension for images and table rows and
-columns). You can use this to get a general idea of the contents of the
-FITS file and what HDU to use for further processing, either with the Fits
-program or any other Gnuastro program.
-
-Here is one example of information about a FITS file with four extensions:
-the first extension has no data, it is a purely meta-data HDU (commonly
-used to keep meta-data about the whole file, or grouping of extensions, see
-@ref{Fits}). The second extension is an image with name @code{IMAGE} and
-single precision floating point type (@code{float32}, see @ref{Numeric data
-types}), it has 4287 pixels along its first (horizontal) axis and 4286
-pixels along its second (vertical) axis. The third extension is also an
-image with name @code{MASK}. It is in 2-byte integer format (@code{int16})
-which is commonly used to keep information about pixels (for example to
-identify which ones were saturated, or which ones had cosmic rays and so
-on), note how it has the same size as the @code{IMAGE} extension. The third
-extension is a binary table called @code{CATALOG} which has 12371 rows and
-5 columns (it probably contains information about the sources in the
-image).
+When no action is requested (and only a file name is given), Fits will print a
list of information about the extension(s) in the file.
+This information includes the HDU number, HDU name (@code{EXTNAME} keyword),
type of data (see @ref{Numeric data types}), and the number of data elements it
contains (size along each dimension for images, and rows and columns for tables).
+You can use this to get a general idea of the contents of the FITS file and
what HDU to use for further processing, either with the Fits program or any
other Gnuastro program.
+
+Here is one example of information about a FITS file with four extensions: the
first extension has no data; it is a purely meta-data HDU (commonly used to
keep meta-data about the whole file, or grouping of extensions, see @ref{Fits}).
+The second extension is an image with name @code{IMAGE} and single precision
floating point type (@code{float32}, see @ref{Numeric data types}), it has 4287
pixels along its first (horizontal) axis and 4286 pixels along its second
(vertical) axis.
+The third extension is also an image with name @code{MASK}.
+It is in 2-byte integer format (@code{int16}), which is commonly used to keep
information about pixels (for example to identify which ones were saturated, or
which ones had cosmic rays and so on); note how it has the same size as the
@code{IMAGE} extension.
+The fourth extension is a binary table called @code{CATALOG} which has 12371
rows and 5 columns (it probably contains information about the sources in the
image).
@example
GNU Astronomy Utilities X.X
@@ -10327,26 +7660,16 @@ HDU (extension) information: `image.fits'.
3 CATALOG table_binary 12371x5
@end example
-If a specific HDU is identified on the command-line with the @option{--hdu}
-(or @option{-h} option) and no operation requested, then the full list of
-header keywords in that HDU will be printed (as if the
-@option{--printallkeys} was called, see below). It is important to remember
-that this only occurs when @option{--hdu} is given on the command-line. The
-@option{--hdu} value given in a configuration file will only be used when a
-specific operation on keywords requested. Therefore as described in the
-paragraphs above, when no explicit call to the @option{--hdu} option is
-made on the command-line and no operation is requested (on the command-line
-or configuration files), the basic information of each HDU/extension is
-printed.
-
-The operating mode and input/output options to Fits are similar to the
-other programs and fully described in @ref{Common options}. The options
-particular to Fits can be divided into two groups: 1) those related to
-modifying HDUs or extensions (see @ref{HDU manipulation}), and 2) those
-related to viewing/modifying meta-data keywords (see @ref{Keyword
-manipulation}). These two classes of options cannot be called together in
-one run: you can either work on the extensions or meta-data keywords in any
-instance of Fits.
+If a specific HDU is identified on the command-line with the @option{--hdu}
(or @option{-h} option) and no operation requested, then the full list of
header keywords in that HDU will be printed (as if the @option{--printallkeys}
was called, see below).
+It is important to remember that this only occurs when @option{--hdu} is given
on the command-line.
+The @option{--hdu} value given in a configuration file will only be used when
a specific operation on keywords is requested.
+Therefore as described in the paragraphs above, when no explicit call to the
@option{--hdu} option is made on the command-line and no operation is requested
(on the command-line or configuration files), the basic information of each
HDU/extension is printed.
+
+The operating mode and input/output options to Fits are similar to the other
programs and fully described in @ref{Common options}.
+The options particular to Fits can be divided into two groups:
+1) those related to modifying HDUs or extensions (see @ref{HDU manipulation}),
and
+2) those related to viewing/modifying meta-data keywords (see @ref{Keyword
manipulation}).
+These two classes of options cannot be called together in one run: you can
either work on the extensions or meta-data keywords in any instance of Fits.
@menu
* HDU manipulation:: Manipulate HDUs within a FITS file.
@@ -10359,39 +7682,29 @@ instance of Fits.
@node HDU manipulation, Keyword manipulation, Invoking astfits, Invoking
astfits
@subsubsection HDU manipulation
-Each header data unit, or HDU (also known as an extension), in a FITS file
-is an independent dataset (data + meta-data). Multiple HDUs can be stored
-in one FITS file, see @ref{Fits}. The HDU modifying options to the Fits
-program are listed below.
-
-These options may be called multiple times in one run. If so, the
-extensions will be copied from the input FITS file to the output FITS file
-in the given order (on the command-line and also in configuration files,
-see @ref{Configuration file precedence}). If the separate classes are
-called together in one run of Fits, then first @option{--copy} is run (on
-all specified HDUs), followed by @option{--cut} (again on all specified
-HDUs), and then @option{--remove} (on all specified HDUs).
-
-The @option{--copy} and @option{--cut} options need an output FITS file
-(specified with the @option{--output} option). If the output file exists,
-then the specified HDU will be copied following the last extension of the
-output file (the existing HDUs in it will be untouched). Thus, after Fits
-finishes, the copied HDU will be the last HDU of the output file. If no
-output file name is given, then automatic output will be used to store the
-HDUs given to this option (see @ref{Automatic output}).
+Each header data unit, or HDU (also known as an extension), in a FITS file is
an independent dataset (data + meta-data).
+Multiple HDUs can be stored in one FITS file, see @ref{Fits}.
+The HDU modifying options to the Fits program are listed below.
+
+These options may be called multiple times in one run.
+If so, the extensions will be copied from the input FITS file to the output
FITS file in the given order (on the command-line and also in configuration
files, see @ref{Configuration file precedence}).
+If the separate classes are called together in one run of Fits, then first
@option{--copy} is run (on all specified HDUs), followed by @option{--cut}
(again on all specified HDUs), and then @option{--remove} (on all specified
HDUs).
+
+The @option{--copy} and @option{--cut} options need an output FITS file
(specified with the @option{--output} option).
+If the output file exists, then the specified HDU will be copied following the
last extension of the output file (the existing HDUs in it will be untouched).
+Thus, after Fits finishes, the copied HDU will be the last HDU of the output
file.
+If no output file name is given, then automatic output will be used to store
the HDUs given to this option (see @ref{Automatic output}).
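For instance, the copy behavior described above might be used as follows (a hypothetical invocation; it assumes Gnuastro is installed and that @file{in.fits} exists with at least two extensions):

```shell
# Append extension 2 of in.fits after the last HDU of out.fits
# (out.fits is created if it does not already exist):
$ astfits in.fits --copy=2 --output=out.fits
```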
@table @option
@item -n
@itemx --numhdus
-Print the number of extensions/HDUs in the given file. Note that this
-option must be called alone and will only print a single number. It is thus
-useful in scripts, for example when you need to do check the number of
-extensions in a FITS file.
+Print the number of extensions/HDUs in the given file.
+Note that this option must be called alone and will only print a single number.
+It is thus useful in scripts, for example when you need to check the number
of extensions in a FITS file.
-For a complete list of basic meta-data on the extensions in a FITS file,
-don't use any of the options in this section or in @ref{Keyword
-manipulation}. For more, see @ref{Invoking astfits}.
+For a complete list of basic meta-data on the extensions in a FITS file, don't
use any of the options in this section or in @ref{Keyword manipulation}.
+For more, see @ref{Invoking astfits}.
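For instance, in a shell script you might use this option as follows (a hypothetical fragment; it assumes Gnuastro is installed so @command{astfits} is in your @code{PATH}, and that @file{image.fits} exists):

```shell
# Hypothetical script fragment: --numhdus prints only a single number,
# so its output can be used directly in shell tests and arithmetic.
n=$(astfits image.fits --numhdus)
if [ "$n" -gt 1 ]; then
  echo "image.fits has $((n - 1)) extension(s) beyond the primary HDU"
fi
```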
@item -C STR
@itemx --copy=STR
@@ -10406,51 +7719,39 @@ output file, see explanations above.
@itemx --remove=STR
Remove the specified HDU from the input file.
-The first (zero-th) HDU cannot be removed with this option. Consider using
-@option{--copy} or @option{--cut} in combination with
-@option{primaryimghdu} to not have an empty zero-th HDU. From CFITSIO: ``In
-the case of deleting the primary array (the first HDU in the file) then
-[it] will be replaced by a null primary array containing the minimum set of
-required keywords and no data.''. So in practice, any existing data (array)
-and meta-data in the first extension will be removed, but the number of
-extensions in the file won't change. This is because of the unique position
-the first FITS extension has in the FITS standard (for example it cannot be
-used to store tables).
+The first (zero-th) HDU cannot be removed with this option.
+Consider using @option{--copy} or @option{--cut} in combination with
@option{--primaryimghdu} to not have an empty zero-th HDU.
+From CFITSIO: ``In the case of deleting the primary array (the first HDU in
the file) then [it] will be replaced by a null primary array containing the
minimum set of required keywords and no data.''.
+So in practice, any existing data (array) and meta-data in the first extension
will be removed, but the number of extensions in the file won't change.
+This is because of the unique position the first FITS extension has in the
FITS standard (for example it cannot be used to store tables).
@item --primaryimghdu
-Copy or cut an image HDU to the zero-th HDU/extension a file that doesn't
-yet exist. This option is thus irrelevant if the output file already exists
-or the copied/cut extension is a FITS table.
+Copy or cut an image HDU to the zero-th HDU/extension of a file that doesn't
yet exist.
+This option is thus irrelevant if the output file already exists or the
copied/cut extension is a FITS table.
@end table
@node Keyword manipulation, , HDU manipulation, Invoking astfits
@subsubsection Keyword manipulation
-The meta-data in each header data unit, or HDU (also known as extension,
-see @ref{Fits}) is stored as ``keyword''s. Each keyword consists of a name,
-value, unit, and comments. The Fits program (see @ref{Fits}) options
-related to viewing and manipulating keywords in a FITS HDU are described
-below.
-
-To see the full list of keywords in a FITS HDU, you can use the
-@option{--printallkeys} option. If any of the keywords are to be modified,
-the headers of the input file will be changed. If you want to keep the
-original FITS file or HDU, it is easiest to create a copy first and then
-run Fits on that. In the FITS standard, keywords are always uppercase. So
-case does not matter in the input or output keyword names you specify.
-
-Most of the options can accept multiple instances in one command. For
-example you can add multiple keywords to delete by calling
-@option{--delete} multiple times, since repeated keywords are allowed, you
-can even delete the same keyword multiple times. The action of such options
-will start from the top most keyword.
-
-The precedence of operations are described below. Note that while the order
-within each class of actions is preserved, the order of individual actions
-is not. So irrespective of what order you called @option{--delete} and
-@option{--update}. First, all the delete operations are going to take
-effect then the update operations.
+The meta-data in each header data unit, or HDU (also known as extension, see
@ref{Fits}) is stored as ``keyword''s.
+Each keyword consists of a name, value, unit, and comments.
+The Fits program (see @ref{Fits}) options related to viewing and manipulating
keywords in a FITS HDU are described below.
+
+To see the full list of keywords in a FITS HDU, you can use the
@option{--printallkeys} option.
+If any of the keywords are to be modified, the headers of the input file will
be changed.
+If you want to keep the original FITS file or HDU, it is easiest to create a
copy first and then run Fits on that.
+In the FITS standard, keywords are always uppercase.
+So case does not matter in the input or output keyword names you specify.
+
+Most of the options can accept multiple instances in one command.
+For example, you can specify multiple keywords to delete by calling
@option{--delete} multiple times; since repeated keywords are allowed, you can
even delete the same keyword multiple times.
+The action of such options will start from the topmost keyword.
+
+The precedence of operations is described below.
+Note that while the order within each class of actions is preserved, the order
of individual actions is not.
+So irrespective of the order in which you call @option{--delete} and
@option{--update}, all the delete operations will take effect first, then the
update operations.
@enumerate
@item
@option{--delete}
@@ -10476,18 +7777,14 @@ effect then the update operations.
@option{--copykeys}
@end enumerate
@noindent
-All possible syntax errors will be reported before the keywords are
-actually written. FITS errors during any of these actions will be reported,
-but Fits won't stop until all the operations are complete. If
-@option{--quitonerror} is called, then Fits will immediately stop upon the
-first error.
+All possible syntax errors will be reported before the keywords are actually
written.
+FITS errors during any of these actions will be reported, but Fits won't stop
until all the operations are complete.
+If @option{--quitonerror} is called, then Fits will immediately stop upon the
first error.
@cindex GNU Grep
-If you want to inspect only a certain set of header keywords, it is easiest
-to pipe the output of the Fits program to GNU Grep. Grep is a very powerful
-and advanced tool to search strings which is precisely made for such
-situations. For example if you only want to check the size of an image FITS
-HDU, you can run:
+If you want to inspect only a certain set of header keywords, it is easiest to
pipe the output of the Fits program to GNU Grep.
+Grep is a very powerful and advanced tool for searching strings, which is
precisely made for such situations.
+For example, if you only want to check the size of an image FITS HDU, you can
run:
@example
$ astfits input.fits | grep NAXIS
@@ -10495,14 +7792,10 @@ $ astfits input.fits | grep NAXIS
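The filtering above needs no FITS-specific knowledge; as a minimal sketch
(using a hypothetical hand-written header instead of real astfits output), the
same Grep call behaves identically on any text:

```shell
# Hypothetical stand-in for the output of 'astfits input.fits':
# four keyword records, three of which mention NAXIS.
printf 'SIMPLE  =  T\nNAXIS   =  2\nNAXIS1  =  100\nNAXIS2  =  200\n' \
  | grep NAXIS
```

Only the three NAXIS records survive the filter, just as only the axis-size
keywords would survive when piping real astfits output.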
@cartouche
@noindent
-@strong{FITS STANDARD KEYWORDS:} Some header keywords are necessary
-for later operations on a FITS file, for example BITPIX or NAXIS, see
-the FITS standard for their full list. If you modify (for example
-remove or rename) such keywords, the FITS file extension might not be
-usable any more. Also be careful for the world coordinate system
-keywords, if you modify or change their values, any future world
-coordinate system (like RA and Dec) measurements on the image will
-also change.
+@strong{FITS STANDARD KEYWORDS:}
+Some header keywords are necessary for later operations on a FITS file (for
example BITPIX or NAXIS); see the FITS standard for their full list.
+If you modify (for example remove or rename) such keywords, the FITS file
extension might not be usable any more.
+Also be careful with the world coordinate system keywords: if you modify or
change their values, any future world coordinate system (like RA and Dec)
measurements on the image will also change.
@end cartouche
@@ -10512,14 +7805,11 @@ The keyword related options to the Fits program are
fully described below.
@item -a STR
@itemx --asis=STR
-Write @option{STR} exactly into the FITS file header with no
-modifications. If it does not conform to the FITS standards, then it might
-cause trouble, so please be very careful with this option. If you want to
-define the keyword from scratch, it is best to use the @option{--write}
-option (see below) and let CFITSIO worry about the standards. The best way
-to use this option is when you want to add a keyword from one FITS file to
-another unchanged and untouched. Below is an example of such a case that
-can be very useful sometimes (on the command-line or in scripts):
+Write @option{STR} exactly into the FITS file header with no modifications.
+If it does not conform to the FITS standards, then it might cause trouble, so
please be very careful with this option.
+If you want to define the keyword from scratch, it is best to use the
@option{--write} option (see below) and let CFITSIO worry about the standards.
+The best way to use this option is when you want to add a keyword from one
FITS file to another unchanged and untouched.
+Below is an example of such a case that can be very useful sometimes (on the
command-line or in scripts):
@example
$ key=$(astfits firstimage.fits | grep KEYWORD)
@@ -10527,43 +7817,32 @@ $ astfits --asis="$key" secondimage.fits
@end example
@cindex GNU Bash
-In particular note the double quotation signs (@key{"}) around the
-reference to the @command{key} shell variable (@command{$key}), since FITS
-keywords usually have lots of space characters, if this variable is not
-quoted, the shell will only give the first word in the full keyword to this
-option, which will definitely be a non-standard FITS keyword and will make
-it hard to work on the file afterwords. See the ``Quoting'' section of the
-GNU Bash manual for more information if your keyword has the special
-characters @key{$}, @key{`}, or @key{\}.
+In particular note the double quotation signs (@key{"}) around the reference
to the @command{key} shell variable (@command{$key}).
+FITS keywords usually have lots of space characters; if this variable is not
quoted, the shell will only give the first word in the full keyword to this
option, which will definitely be a non-standard FITS keyword and will make it
hard to work on the file afterwards.
+See the ``Quoting'' section of the GNU Bash manual for more information if
your keyword has the special characters @key{$}, @key{`}, or @key{\}.
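The effect of the missing quotes can be sketched without any FITS file at all;
the @command{key} value below is a hand-written stand-in for a real keyword
record:

```shell
# A hypothetical keyword record with internal spaces (not read from a file).
key="NAXIS1  =  100 / length of axis 1"

# Unquoted: the shell word-splits the value, so only the first word arrives.
printf '%s\n' $key | head -n 1

# Quoted: the full record is passed through as one argument.
printf '%s\n' "$key"
```

The first command prints only `NAXIS1`, while the second prints the whole
record, which is exactly the difference between a broken and a valid
@option{--asis} call.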
@item -d STR
@itemx --delete=STR
-Delete one instance of the @option{STR} keyword from the FITS
-header. Multiple instances of @option{--delete} can be given (possibly even
-for the same keyword, when its repeated in the meta-data). All keywords
-given will be removed from the headers in the same given order. If the
-keyword doesn't exist, Fits will give a warning and return with a non-zero
-value, but will not stop. To stop as soon as an error occurs, run with
-@option{--quitonerror}.
+Delete one instance of the @option{STR} keyword from the FITS header.
+Multiple instances of @option{--delete} can be given (possibly even for the
same keyword, when it is repeated in the meta-data).
+All keywords given will be removed from the headers in the same given order.
+If the keyword doesn't exist, Fits will give a warning and return with a
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
@item -r STR
@itemx --rename=STR
-Rename a keyword to a new value. @option{STR} contains both the existing
-and new names, which should be separated by either a comma (@key{,}) or a
-space character. Note that if you use a space character, you have to put
-the value to this option within double quotation marks (@key{"}) so the
-space character is not interpreted as an option separator. Multiple
-instances of @option{--rename} can be given in one command. The keywords
-will be renamed in the specified order. If the keyword doesn't exist, Fits
-will give a warning and return with a non-zero value, but will not stop. To
-stop as soon as an error occurs, run with @option{--quitonerror}.
+Rename a keyword to a new value.
+@option{STR} contains both the existing and new names, which should be
separated by either a comma (@key{,}) or a space character.
+Note that if you use a space character, you have to put the value to this
option within double quotation marks (@key{"}) so the space character is not
interpreted as an option separator.
+Multiple instances of @option{--rename} can be given in one command.
+The keywords will be renamed in the specified order.
+If the keyword doesn't exist, Fits will give a warning and return with a
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
@item -u STR
@itemx --update=STR
-Update a keyword, its value, its comments and its units in the format
-described below. If there are multiple instances of the keyword in the
-header, they will be changed from top to bottom (with multiple
-@option{--update} options).
+Update a keyword, its value, its comments and its units in the format
described below.
+If there are multiple instances of the keyword in the header, they will be
changed from top to bottom (with multiple @option{--update} options).
@noindent
The format of the values to this option can best be specified with an
@@ -10573,50 +7852,36 @@ example:
--update=KEYWORD,value,"comments for this keyword",unit
@end example
-If there is a writing error, Fits will give a warning and return with a
-non-zero value, but will not stop. To stop as soon as an error occurs, run
-with @option{--quitonerror}.
-
-@noindent
-The value can be any numerical or string value@footnote{Some tricky
-situations arise with values like `@command{87095e5}', if this was intended
-to be a number it will be kept in the header as @code{8709500000} and there
-is no problem. But this can also be a shortened Git commit hash. In the
-latter case, it should be treated as a string and stored as it is
-written. Commit hashes are very important in keeping the history of a file
-during your research and such values might arise without you noticing them
-in your reproduction pipeline. One solution is to use @command{git
-describe} instead of the short hash alone. A less recommended solution is
-to add a space after the commit hash and Fits will write the value as
-`@command{87095e5 }' in the header. If you later compare the strings on the
-shell, the space character will be ignored by the shell in the latter
-solution and there will be no problem.}. Other than the @code{KEYWORD}, all
-the other values are optional. To leave a given token empty, follow the
-preceding comma (@key{,}) immediately with the next. If any space character
-is present around the commas, it will be considered part of the respective
-token. So if more than one token has space characters within it, the safest
-method to specify a value to this option is to put double quotation marks
-around each individual token that needs it. Note that without double
-quotation marks, space characters will be seen as option separators and can
-lead to undefined behavior.
+If there is a writing error, Fits will give a warning and return with a
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
+
+@noindent
+The value can be any numerical or string value@footnote{Some tricky
situations arise with values like `@command{87095e5}': if this was intended to
be a number, it will be kept in the header as @code{8709500000} and there is no
problem.
+But this can also be a shortened Git commit hash.
+In the latter case, it should be treated as a string and stored as it is
written.
+Commit hashes are very important in keeping the history of a file during your
research and such values might arise without you noticing them in your
reproduction pipeline.
+One solution is to use @command{git describe} instead of the short hash alone.
+A less recommended solution is to add a space after the commit hash and Fits
will write the value as `@command{87095e5 }' in the header.
+If you later compare the strings on the shell, the space character will be
ignored by the shell in the latter solution and there will be no problem.}.
+Other than the @code{KEYWORD}, all the other values are optional.
+To leave a given token empty, follow the preceding comma (@key{,}) immediately
with the next.
+If any space character is present around the commas, it will be considered
part of the respective token.
+So if more than one token has space characters within it, the safest method to
specify a value to this option is to put double quotation marks around each
individual token that needs it.
+Note that without double quotation marks, space characters will be seen as
option separators and can lead to undefined behavior.
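The footnote's last point (that an unquoted shell comparison ignores a
trailing space) can be checked directly; the hash-like values below are made
up for the demonstration:

```shell
# A hypothetical commit-hash string, with and without a trailing space.
a="87095e5"
b="87095e5 "

# Unquoted expansion lets the shell strip the trailing space before the
# comparison, so the two values test as equal.
[ $a = $b ] && echo "equal without quotes"

# Quoted expansion keeps the space, so the same test fails.
[ "$a" = "$b" ] || echo "different with quotes"
```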
@item -w STR
@itemx --write=STR
-Write a keyword to the header. For the possible value input formats,
-comments and units for the keyword, see the @option{--update} option
-above. The special names (first string) below will cause a special
-behavior:
+Write a keyword to the header.
+For the possible value input formats, comments and units for the keyword, see
the @option{--update} option above.
+The special names (first string) below will cause a special behavior:
@table @option
@item /
-Write a ``title'' to the list of keywords. A title consists of one blank
-line and another which is blank for several spaces and starts with a slash
-(@key{/}). The second string given to this option is the ``title'' or
-string printed after the slash. For example with the command below you can
-add a ``title'' of `My keywords' after the existing keywords and add the
-subsequent @code{K1} and @code{K2} keywords under it (note that keyword
-names are not case sensitive).
+Write a ``title'' to the list of keywords.
+A title consists of one blank line, followed by another that is blank for
several spaces and then starts with a slash (@key{/}).
+The second string given to this option is the ``title'' or string printed
after the slash.
+For example with the command below you can add a ``title'' of `My keywords'
after the existing keywords and add the subsequent @code{K1} and @code{K2}
keywords under it (note that keyword names are not case sensitive).
@example
$ astfits test.fits -h1 --write=/,"My keywords" \
@@ -10631,19 +7896,14 @@ K2 = 4.56 / My second keyword
END
@end example
-Adding a ``title'' before each contextually separate group of header
-keywords greatly helps in readability and visual inspection of the
-keywords. So generally, when you want to add new FITS keywords, its good
-practice to also add a title before them.
+Adding a ``title'' before each contextually separate group of header keywords
greatly helps in readability and visual inspection of the keywords.
+So generally, when you want to add new FITS keywords, it is good practice to
also add a title before them.
-The reason you need to use @key{/} as the keyword name for setting a title
-is that @key{/} is the first non-white character.
+The reason you need to use @key{/} as the keyword name for setting a title is
that @key{/} is the first non-white character.
-The title(s) is(are) written into the FITS with the same order that
-@option{--write} is called. Therefore in one run of the Fits program, you
-can specify many different titles (with their own keywords under them). For
-example the command below that builds on the previous example and adds
-another group of keywords named @code{A1} and @code{A2}.
+The title(s) is(are) written into the FITS file in the same order that
@option{--write} is called.
+Therefore in one run of the Fits program, you can specify many different
titles (with their own keywords under them).
+For example, the command below builds on the previous example and adds
another group of keywords named @code{A1} and @code{A2}.
@example
$ astfits test.fits -h1 --write=/,"My keywords" \
@@ -10658,97 +7918,70 @@ $ astfits test.fits -h1 --write=/,"My keywords" \
@cindex CFITSIO
@cindex @code{DATASUM}: FITS keyword
@cindex @code{CHECKSUM}: FITS keyword
-When nothing is given afterwards, the header integrity
-keywords@footnote{Section 4.4.2.7 (page 15) of
-@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}}
-@code{DATASUM} and @code{CHECKSUM} will be written/updated. They are
-calculated and written by CFITSIO. They thus comply with the FITS standard
-4.0 that defines these keywords.
-
-If a value is given (for example
-@option{--write=checksum,my-own-checksum,"my checksum"}), then CFITSIO
-won't be called to calculate these two keywords and the value (as well as
-possible comment and unit) will be written just like any other
-keyword. This is generally not recommended, but necessary in special
-circumstances (where the checksum needs to be manually updated for
-example).
-
-@code{DATASUM} only depends on the data section of the HDU/extension, so it
-is not changed when you update the keywords. But @code{CHECKSUM} also
-depends on the header and will not be valid if you make any further changes
-to the header. This includes any further keyword modification options in
-the same call to the Fits program. Therefore it is recommended to write
-these keywords as the last keywords that are written/modified in the
-extension. You can use the @option{--verify} option (described below) to
-verify the values of these two keywords.
+When nothing is given afterwards, the header integrity
keywords@footnote{Section 4.4.2.7 (page 15) of
@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}}
@code{DATASUM} and @code{CHECKSUM} will be written/updated.
+They are calculated and written by CFITSIO.
+They thus comply with the FITS standard 4.0 that defines these keywords.
+
+If a value is given (for example @option{--write=checksum,my-own-checksum,"my
checksum"}), then CFITSIO won't be called to calculate these two keywords and
the value (as well as possible comment and unit) will be written just like any
other keyword.
+This is generally not recommended, but necessary in special circumstances
(where the checksum needs to be manually updated for example).
+
+@code{DATASUM} only depends on the data section of the HDU/extension, so it is
not changed when you update the keywords.
+But @code{CHECKSUM} also depends on the header and will not be valid if you
make any further changes to the header.
+This includes any further keyword modification options in the same call to the
Fits program.
+Therefore it is recommended to write these keywords as the last keywords that
are written/modified in the extension.
+You can use the @option{--verify} option (described below) to verify the
values of these two keywords.
@item datasum
-Similar to @option{checksum}, but only write the @code{DATASUM} keyword
-(that doesn't depend on the header keywords, only the data).
+Similar to @option{checksum}, but only write the @code{DATASUM} keyword (that
doesn't depend on the header keywords, only the data).
@end table
@item -H STR
@itemx --history STR
-Add a @code{HISTORY} keyword to the header with the given value. A new
-@code{HISTORY} keyword will be created for every instance of this
-option. If the string given to this option is longer than 70 characters, it
-will be separated into multiple keyword cards. If there is an error, Fits
-will give a warning and return with a non-zero value, but will not stop. To
-stop as soon as an error occurs, run with @option{--quitonerror}.
+Add a @code{HISTORY} keyword to the header with the given value.
+A new @code{HISTORY} keyword will be created for every instance of this
option.
+If the string given to this option is longer than 70 characters, it will be
separated into multiple keyword cards.
+If there is an error, Fits will give a warning and return with a non-zero
value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
@item -c STR
@itemx --comment STR
-Add a @code{COMMENT} keyword to the header with the given value. Similar to
-the explanation for @option{--history} above.
+Add a @code{COMMENT} keyword to the header with the given value.
+Similar to the explanation for @option{--history} above.
@item -t
@itemx --date
-Put the current date and time in the header. If the @code{DATE} keyword
-already exists in the header, it will be updated. If there is a writing
-error, Fits will give a warning and return with a non-zero value, but will
-not stop. To stop as soon as an error occurs, run with
-@option{--quitonerror}.
+Put the current date and time in the header.
+If the @code{DATE} keyword already exists in the header, it will be updated.
+If there is a writing error, Fits will give a warning and return with a
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
@item -p
@itemx --printallkeys
-Print all the keywords in the specified FITS extension (HDU) on the
-command-line. If this option is called along with any of the other keyword
-editing commands, as described above, all other editing commands take
-precedence to this. Therefore, it will print the final keywords after all
-the editing has been done.
+Print all the keywords in the specified FITS extension (HDU) on the
command-line.
+If this option is called along with any of the other keyword editing commands,
as described above, all other editing commands take precedence over it.
+Therefore, it will print the final keywords after all the editing has been
done.
@item -v
@itemx --verify
-Verify the @code{DATASUM} and @code{CHECKSUM} data integrity keywords of
-the FITS standard. See the description under the @code{checksum} (under
-@option{--write}, above) for more on these keywords.
+Verify the @code{DATASUM} and @code{CHECKSUM} data integrity keywords of the
FITS standard.
+See the description of @code{checksum} (under @option{--write}, above) for
more on these keywords.
-This option will print @code{Verified} for both keywords if they can be
-verified. Otherwise, if they don't exist in the given HDU/extension, it
-will print @code{NOT-PRESENT}, and if they cannot be verified it will print
-@code{INCORRECT}. In the latter case (when the keyword values exist but
-can't be verified), the Fits program will also return with a failure.
+This option will print @code{Verified} for both keywords if they can be
verified.
+Otherwise, if they don't exist in the given HDU/extension, it will print
@code{NOT-PRESENT}, and if they cannot be verified it will print
@code{INCORRECT}.
+In the latter case (when the keyword values exist but can't be verified), the
Fits program will also return with a failure.
-By default this function will also print a short description of the
-@code{DATASUM} AND @code{CHECKSUM} keywords. You can suppress this extra
-information with @code{--quiet} option.
+By default this function will also print a short description of the
@code{DATASUM} and @code{CHECKSUM} keywords.
+You can suppress this extra information with the @option{--quiet} option.
@item --copykeys=INT:INT
Copy the input's keyword records in the given range (inclusive) to the
output HDU (specified with the @option{--output} and @option{--outhdu}
options, for the filename and HDU/extension respectively).
-The given string to this option must be two integers separated by a colon
-(@key{:}). The first integer must be positive (counting of the keyword
-records starts from 1). The second integer may be negative (zero is not
-acceptable) or an integer larger than the first.
+The given string to this option must be two integers separated by a colon
(@key{:}).
+The first integer must be positive (counting of the keyword records starts
from 1).
+The second integer may be negative (zero is not acceptable) or an integer
larger than the first.
-A negative second integer means counting from the end. So @code{-1} is the
-last copy-able keyword (not including the @code{END} keyword).
+A negative second integer means counting from the end.
+So @code{-1} is the last copy-able keyword (not including the @code{END}
keyword).
-To see the header keywords of the input with a number before them, you can
-pipe the output of the FITS program (when it prints all the keywords in an
-extension) into the @command{cat} program like below:
+To see the header keywords of the input with a number before them, you can
pipe the output of the FITS program (when it prints all the keywords in an
extension) into the @command{cat} program like below:
@example
$ astfits input.fits -h1 | cat -n
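@command{cat -n} itself simply numbers whatever lines it reads, so the idea
can be sketched with any text (the keyword names below are just placeholders
for real header records):

```shell
# Number a few placeholder keyword names, the same way 'cat -n' would
# number the real keyword records printed by astfits.
printf 'SIMPLE\nBITPIX\nNAXIS\n' | cat -n
```

Each line comes out preceded by its 1-based record number, which is the number
that @option{--copykeys} expects.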
@@ -10759,36 +7992,24 @@ The HDU/extension to write the output keywords of
@option{--copykeys}.
@item -Q
@itemx --quitonerror
-Quit if any of the operations above are not successful. By default if
-an error occurs, Fits will warn the user of the faulty keyword and
-continue with the rest of actions.
+Quit if any of the operations above are not successful.
+By default if an error occurs, Fits will warn the user of the faulty keyword
and continue with the rest of actions.
@item -s STR
@itemx --datetosec STR
@cindex Unix epoch time
@cindex Time, Unix epoch
@cindex Epoch, Unix time
-Interpret the value of the given keyword in the FITS date format (most
-generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) and return the corresponding
-Unix epoch time (number of seconds that have passed since 00:00:00
-Thursday, January 1st, 1970). The @code{Thh:mm:ss.ddd...} section
-(specifying the time of day), and also the @code{.ddd...} (specifying the
-fraction of a second) are optional. The value to this option must be the
-FITS keyword name that contains the requested date, for example
-@option{--datetosec=DATE-OBS}.
+Interpret the value of the given keyword in the FITS date format (most
generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) and return the corresponding Unix
epoch time (number of seconds that have passed since 00:00:00 Thursday, January
1st, 1970).
+The @code{Thh:mm:ss.ddd...} section (specifying the time of day), and also the
@code{.ddd...} (specifying the fraction of a second) are optional.
+The value to this option must be the FITS keyword name that contains the
requested date, for example @option{--datetosec=DATE-OBS}.
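For a literal date value (rather than one read from a FITS keyword), GNU
@command{date} performs the same conversion; this is only an illustration of
the epoch arithmetic, not a replacement for @option{--datetosec}:

```shell
# One day after the Unix epoch, interpreted in UTC, is exactly
# 86400 seconds (24 * 60 * 60).
date -u -d "1970-01-02T00:00:00" +%s
```

The command prints 86400.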
@cindex GNU C Library
-This option can also interpret the older FITS date format
-(@code{DD/MM/YYThh:mm:ss.ddd...}) where only two characters are given to
-the year. In this case (following the GNU C Library), this option will make
-the following assumption: values 68 to 99 correspond to the years 1969 to
-1999, and values 0 to 68 as the years 2000 to 2068.
+This option can also interpret the older FITS date format
(@code{DD/MM/YYThh:mm:ss.ddd...}) where only two characters are given to the
year.
+In this case (following the GNU C Library), this option will make the
following assumption: values 69 to 99 correspond to the years 1969 to 1999,
and values 0 to 68 to the years 2000 to 2068.
-This is a very useful option for operations on the FITS date values, for
-example sorting FITS files by their dates, or finding the time difference
-between two FITS files. The advantage of working with the Unix epoch time
-is that you don't have to worry about calendar details (for example the
-number of days in different months, or leap years, and etc).
+This is a very useful option for operations on the FITS date values, for
example sorting FITS files by their dates, or finding the time difference
between two FITS files.
+The advantage of working with the Unix epoch time is that you don't have to
worry about calendar details (for example the number of days in different
months, or leap years, etc).
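As a small sketch of that advantage (using GNU @command{date} on made-up
timestamps in place of @option{--datetosec}), a time difference across a year
boundary reduces to plain integer arithmetic:

```shell
# Two hypothetical observation times, six hours apart across new year.
t1=$(date -u -d "2018-12-31T23:00:00" +%s)
t2=$(date -u -d "2019-01-01T05:00:00" +%s)

# No calendar logic needed: epoch seconds subtract directly.
echo $((t2 - t1))
```

This prints 21600 (six hours), with no special handling of the month or year
change.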
@end table
@@ -10814,63 +8035,38 @@ number of days in different months, or leap years, and
etc).
@section Sort FITS files by night
@cindex Calendar
-FITS images usually contain (several) keywords for preserving important
-dates. In particular, for lower-level data, this is usually the observation
-date and time (for example, stored in the @code{DATE-OBS} keyword
-value). When analyzing observed datasets, many calibration steps (like the
-dark, bias or flat-field), are commonly calculated on a per-observing-night
-basis.
-
-However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd})
-is based on the western (Gregorian) calendar. Dates that are stored in this
-format are complicated for automatic processing: a night starts in the
-final hours of one calendar day, and extends to the early hours of the next
-calendar day. As a result, to identify datasets from one night, we commonly
-need to search for two dates. However calendar peculiarities can make this
-identification very difficult. For example when an observation is done on
-the night separating two months (like the night starting on March 31st and
-going into April 1st), or two years (like the night starting on December
-31st 2018 and going into January 1st, 2019). To account for such
-situations, it is necessary to keep track of how many days are in a month,
-and leap years, and etc.
+FITS images usually contain (several) keywords for preserving important dates.
+In particular, for lower-level data, this is usually the observation date and
time (for example, stored in the @code{DATE-OBS} keyword value).
+When analyzing observed datasets, many calibration steps (like the dark, bias
or flat-field) are commonly calculated on a per-observing-night basis.
+
+However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd}) is
based on the western (Gregorian) calendar.
+Dates that are stored in this format are complicated for automatic processing:
a night starts in the final hours of one calendar day, and extends to the early
hours of the next calendar day.
+As a result, to identify datasets from one night, we commonly need to search
for two dates.
+However, calendar peculiarities can make this identification very difficult.
+For example, an observation may be done on the night separating two months
(like the night starting on March 31st and going into April 1st), or two years
(like the night starting on December 31st, 2018 and going into January 1st,
2019).
+To account for such situations, it is necessary to keep track of how many days
are in a month, and leap years, etc.
@cindex Unix epoch time
@cindex Time, Unix epoch
@cindex Epoch, Unix time
-Gnuastro's @file{astscript-sort-by-night} script is created to help in such
-important scenarios. It uses @ref{Fits} to convert the FITS date format
-into the Unix epoch time (number of seconds since 00:00:00 of January 1st,
-1970), using the @option{--datetosec} option. The Unix epoch time is a
-single number (integer, if not given in sub-second precision), enabling
-easy comparison and sorting of dates after January 1st, 1970.
+Gnuastro's @file{astscript-sort-by-night} script is created to help in such
important scenarios.
+It uses @ref{Fits} to convert the FITS date format into the Unix epoch time
(number of seconds since 00:00:00 of January 1st, 1970), using the
@option{--datetosec} option.
+The Unix epoch time is a single number (integer, if not given in sub-second
precision), enabling easy comparison and sorting of dates after January 1st,
1970.
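The core trick of such a script can be sketched with GNU @command{date} alone
(the 09:00 cutoff and the timestamps below are assumptions made for the
illustration; the real script reads its dates from the FITS files):

```shell
# Two hypothetical exposures on the same observing night, either side of
# midnight.  Shifting the epoch time back by 9 hours before extracting
# the calendar date collapses both onto one "night" label.
for d in "2018-12-31T23:30:00" "2019-01-01T02:15:00"; do
  t=$(date -u -d "$d" +%s)
  date -u -d "@$((t - 9*3600))" +%Y-%m-%d
done
```

Both iterations print `2018-12-31`, so the two exposures sort into the same
night despite the year boundary between them.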
-You can use this script as a basis for making a much more highly customized
-sorting script. Here are some examples
+You can use this script as a basis for making a much more customized sorting
script.
+Here are some examples:
@itemize
@item
-If you need to copy the files, but only need a single extension (not the
-whole file), you can add a step just before the making of the symbolic
-links, or copies, and change it to only copy a certain extension of the
-FITS file using the Fits program's @option{--copy} option, see @ref{HDU
-manipulation}.
+If you need to copy the files, but only need a single extension (not the whole
file), you can add a step just before the making of the symbolic links, or
copies, and change it to only copy a certain extension of the FITS file using
the Fits program's @option{--copy} option, see @ref{HDU manipulation}.
@item
-If you need to classify the files with finer detail (for example the
-purpose of the dataset), you can add a step just before the making of the
-symbolic links, or copies, to specify a file-name prefix based on other
-certain keyword values in the files. For example when the FITS files have a
-keyword to specify if the dataset is a science, bias, or flat-field
-image. You can read it and to add a @code{sci-}, @code{bias-}, or
-@code{flat-} to the created file (after the @option{--prefix})
-automatically.
-
-For example, let's assume the observing mode is stored in the hypothetical
-@code{MODE} keyword, which can have three values of @code{BIAS-IMAGE},
-@code{SCIENCE-IMAGE} and @code{FLAT-EXP}. With the step below, you can
-generate a mode-prefix, and add it to the generated link/copy names (just
-correct the filename and extension of the first line to the script's
-variables):
+If you need to classify the files with finer detail (for example the purpose
of the dataset), you can add a step just before the making of the symbolic
links, or copies, to specify a file-name prefix based on other certain keyword
values in the files.
+For example, the FITS files may have a keyword specifying whether the dataset
is a science, bias, or flat-field image.
+You can read it and automatically add a @code{sci-}, @code{bias-}, or
@code{flat-} prefix to the created file (after the @option{--prefix}).
+
+For example, let's assume the observing mode is stored in the hypothetical
@code{MODE} keyword, which can have three values of @code{BIAS-IMAGE},
@code{SCIENCE-IMAGE} and @code{FLAT-EXP}.
+With the step below, you can generate a mode-prefix, and add it to the
generated link/copy names (just correct the filename and extension of the first
line to the script's variables):
@example
modepref=$(astfits infile.fits -h1 \
@@ -10884,26 +8080,18 @@ modepref=$(astfits infile.fits -h1 \
@cindex GNU AWK
@cindex GNU Sed
-Here is a description of it. We first use @command{astfits} to print all
-the keywords in extension @code{1} of @file{infile.fits}. In the FITS
-standard, string values (that we are assuming here) are placed in single
-quotes (@key{'}) which are annoying in this context/use-case. Therefore, we
-pipe the output of @command{astfits} into @command{sed} to remove all such
-quotes (substituting them with a blank space). The result is then piped to
-AWK for giving us the final mode-prefix: with @code{$1=="MODE"}, we ask AWK
-to only consider the line where the first column is @code{MODE}. There is
-an equal sign between the key name and value, so the value is the third
-column (@code{$3} in AWK). We thus use a simple @code{if-else} structure to
-look into this value and print our custom prefix based on it. The output of
-AWK is then stored in the @code{modepref} shell variable which you can add
-to the link/copy name.
-
-With the solution above, the increment of the file counter for each night
-will be independent of the mode. If you want the counter to be
-mode-dependent, you can add a different counter for each mode and use that
-counter instead of the generic counter for each night (based on the value
-of @code{modepref}). But we'll leave the implementation of this step to you
-as an exercise.
+Here is a description of it.
+We first use @command{astfits} to print all the keywords in extension @code{1}
of @file{infile.fits}.
+In the FITS standard, string values (that we are assuming here) are placed in
single quotes (@key{'}) which are annoying in this context/use-case.
+Therefore, we pipe the output of @command{astfits} into @command{sed} to
remove all such quotes (substituting them with a blank space).
+The result is then piped to AWK to give us the final mode-prefix: with
@code{$1=="MODE"}, we ask AWK to only consider the line where the first column
is @code{MODE}.
+There is an equal sign between the key name and value, so the value is the
third column (@code{$3} in AWK).
+We thus use a simple @code{if-else} structure to look into this value and
print our custom prefix based on it.
+The output of AWK is then stored in the @code{modepref} shell variable which
you can add to the link/copy name.
+
+With the solution above, the increment of the file counter for each night will
be independent of the mode.
+If you want the counter to be mode-dependent, you can add a different counter
for each mode and use that counter instead of the generic counter for each
night (based on the value of @code{modepref}).
+But we'll leave the implementation of this step to you as an exercise.
@end itemize
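One possible starting point for that exercise is sketched below; the
@code{modepref} values and counter names are purely illustrative and not part
of the installed script:

```shell
# Hypothetical sketch: one counter per observing mode, so the
# file number restarts at 1 for each of sci-/bias-/flat-.
sci=0; bias=0; flat=0
for modepref in sci- bias- sci- flat-; do
    case "$modepref" in
        sci-)  sci=$((sci+1));   num=$sci  ;;
        bias-) bias=$((bias+1)); num=$bias ;;
        flat-) flat=$((flat+1)); num=$flat ;;
    esac
    # Use $num instead of the generic per-night counter when
    # building the link/copy name.
    echo "${modepref}n1-${num}.fits"
done
```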
@@ -10914,10 +8102,9 @@ as an exercise.
@node Invoking astscript-sort-by-night, , Sort FITS files by night, Sort FITS
files by night
@subsection Invoking astscript-sort-by-night
-This installed script will read a FITS date formatted value from the given
-keyword, and classify the input FITS files into individual nights. For more
-on installed scripts please see (see @ref{Installed scripts}). This script
-can be used with the following general template:
+This installed script will read a FITS date formatted value from the given
keyword, and classify the input FITS files into individual nights.
+For more on installed scripts, see @ref{Installed scripts}.
+This script can be used with the following general template:
@example
$ astscript-sort-by-night [OPTION...] FITS-files
@@ -10934,25 +8121,15 @@ $ astscript-sort-by-night --key=DATE-OBS
/path/to/data/*.fits
$ astscript-sort-by-night --link --prefix=img- /path/to/data/*.fits
@end example
-This script will look into a HDU/extension (@option{--hdu}) for a keyword
-(@option{--key}) in the given FITS files and interpret the value as a
-date. The inputs will be separated by "night"s (9:00a.m to next day's
-8:59:59a.m, spanning two calendar days, exact hour can be set with
-@option{--hour}).
+This script will look into an HDU/extension (@option{--hdu}) for a keyword
(@option{--key}) in the given FITS files and interpret the value as a date.
+The inputs will be separated by ``night'' (9:00 a.m. to the next day's
8:59:59 a.m., spanning two calendar days; the exact hour can be set with
@option{--hour}).
-The default output is a list of all the input files along with the
-following two columns: night number and file number in that night (sorted
-by time). With @option{--link} a symbolic link will be made (one for each
-input) that contains the night number, and number of file in that night
-(sorted by time), see the description of @option{--link} for more. When
-@option{--copy} is used instead of a link, a copy of the inputs will be
-made instead of symbolic link.
+The default output is a list of all the input files along with the following
two columns: night number and file number in that night (sorted by time).
+With @option{--link} a symbolic link will be made (one for each input) that
contains the night number, and number of file in that night (sorted by time),
see the description of @option{--link} for more.
+When @option{--copy} is used, a copy of the inputs will be made instead of a
symbolic link.
-Below you can see one example where all the @file{target-*.fits} files in
-the @file{data} directory should be separated by observing night according
-to the @code{DATE-OBS} keyword value in their second extension (number
-@code{1}, recall that HDU counting starts from 0). You can see the output
-after the @code{ls} command.
+Below you can see one example where all the @file{target-*.fits} files in the
@file{data} directory should be separated by observing night according to the
@code{DATE-OBS} keyword value in their second extension (number @code{1},
recall that HDU counting starts from 0).
+You can see the output after the @code{ls} command.
@example
$ astscript-sort-by-night -pimg- -h1 -kDATE-OBS data/target-*.fits
@@ -10960,21 +8137,16 @@ $ ls
img-n1-1.fits img-n1-2.fits img-n2-1.fits ...
@end example
-The outputs can be placed in a different (already existing) directory by
-including that directory's name in the @option{--prefix} value, for example
-@option{--prefix=sorted/img-} will put them all under the @file{sorted}
-directory.
+The outputs can be placed in a different (already existing) directory by
including that directory's name in the @option{--prefix} value, for example
@option{--prefix=sorted/img-} will put them all under the @file{sorted}
directory.
-This script can be configured like all Gnuastro's programs (through
-command-line options, see @ref{Common options}), with some minor
-differences that are described in @ref{Installed scripts}. The particular
-options to this script are listed below:
+This script can be configured like all Gnuastro's programs (through
command-line options, see @ref{Common options}), with some minor differences
that are described in @ref{Installed scripts}.
+The particular options to this script are listed below:
@table @option
@item -h STR
@itemx --hdu=STR
-The HDU/extension to use in all the given FITS files. All of the given FITS
-files must have this extension.
+The HDU/extension to use in all the given FITS files.
+All of the given FITS files must have this extension.
@item -k STR
@itemx --key=STR
@@ -10982,37 +8154,31 @@ The keyword name that contains the FITS date format to
classify/sort by.
@item -H FLT
@itemx --hour=FLT
-The hour that defines the next ``night''. By default, all times before
-9:00a.m are considered to belong to the previous calendar night. If a
-sub-hour value is necessary, it should be given in units of hours, for
-example @option{--hour=9.5} corresponds to 9:30a.m.
+The hour that defines the next ``night''.
+By default, all times before 9:00 a.m. are considered to belong to the
previous calendar night.
+If a sub-hour value is necessary, it should be given in units of hours, for
example, @option{--hour=9.5} corresponds to 9:30 a.m.
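Conceptually, the effect of @option{--hour} can be viewed as shifting each Unix
epoch time back by that many hours before binning into 24-hour intervals (an
illustrative computation, not the script's actual implementation):

```shell
# Two exposures of the same observing night (before and after
# midnight), plus one from the next morning, with hour=9.
hour=9
t1=$(date -u -d "2019-03-31T22:00:00" +%s)  # evening
t2=$(date -u -d "2019-04-01T03:00:00" +%s)  # after midnight
t3=$(date -u -d "2019-04-01T10:00:00" +%s)  # next night begins

# Shift by the night boundary, then bin into whole days.
n1=$(( (t1 - hour*3600) / 86400 ))
n2=$(( (t2 - hour*3600) / 86400 ))
n3=$(( (t3 - hour*3600) / 86400 ))

test "$n1" -eq "$n2" && echo "t1 and t2: same night"
test "$n3" -gt "$n2" && echo "t3: next night"
```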
@cartouche
@noindent
@cindex Time zone
@cindex UTC (Universal time coordinate)
@cindex Universal time coordinate (UTC)
-@strong{Dealing with time zones:} the time that is recorded in
-@option{--key} may be in UTC (Universal Time Coordinate). However, the
-organization of the images taken during the night depends on the local
-time. It is possible to take this into account by setting the
-@option{--hour} option to the local time in UTC.
-
-For example, consider a set of images taken in Auckland (New Zealand,
-UTC+12) during different nights. If you want to classify these images by
-night, you have to know at which time (in UTC time) the Sun rises (or any
-other separator/definition of a different night). In this particular
-example, you can use @option{--hour=21}. Because in Auckland, a night
-finishes (roughly) at the local time of 9:00, which corresponds to 21:00
-UTC.
+@strong{Dealing with time zones:}
+The time that is recorded in @option{--key} may be in UTC (Universal Time
Coordinate).
+However, the organization of the images taken during the night depends on the
local time.
+It is possible to take this into account by setting the @option{--hour} option
to the local time in UTC.
+
+For example, consider a set of images taken in Auckland (New Zealand, UTC+12)
during different nights.
+If you want to classify these images by night, you have to know at which time
(in UTC time) the Sun rises (or any other separator/definition of a different
night).
+In this particular example, you can use @option{--hour=21}, because in
Auckland a night finishes (roughly) at the local time of 9:00, which
corresponds to 21:00 UTC.
@end cartouche
@item -l
@itemx --link
-Create a symbolic link for each input FITS file. This option cannot be used
-with @option{--copy}. The link will have a standard name in the following
-format (variable parts are written in @code{CAPITAL} letters and described
-after it):
+Create a symbolic link for each input FITS file.
+This option cannot be used with @option{--copy}.
+The link will have a standard name in the following format (variable parts are
written in @code{CAPITAL} letters and described after it):
@example
PnN-I.fits
@@ -11020,38 +8186,31 @@ PnN-I.fits
@table @code
@item P
-This is the value given to @option{--prefix}. By default, its value is
-@code{./} (to store the links in the directory this script was run in). See
-the description of @code{--prefix} for more.
+This is the value given to @option{--prefix}.
+By default, its value is @code{./} (to store the links in the directory this
script was run in).
+See the description of @code{--prefix} for more.
@item N
-This is the night-counter: starting from 1. @code{N} is just incremented by
-1 for the next night, no matter how many nights (without any dataset) there
-are between two subsequent observing nights (its just an identifier for
-each night which you can easily map to different calendar nights).
+This is the night-counter: starting from 1.
+@code{N} is just incremented by 1 for the next night, no matter how many
nights (without any dataset) there are between two subsequent observing nights
(it is just an identifier for each night which you can easily map to different
calendar nights).
@item I
File counter in that night, sorted by time.
@end table
@item -c
@itemx --copy
-Make a copy of each input FITS file with the standard naming convention
-described in @option{--link}. With this option, instead of making a link, a
-copy is made. This option cannot be used with @option{--link}.
+Make a copy of each input FITS file with the standard naming convention
described in @option{--link}.
+With this option, instead of making a link, a copy is made.
+This option cannot be used with @option{--link}.
@item -p STR
@itemx --prefix=STR
-Prefix to append before the night-identifier of each newly created link or
-copy. This option is thus only relevant with the @option{--copy} or
-@option{--link} options. See the description of @option{--link} for how its
-used. For example, with @option{--prefix=img-}, all the created file names
-in the current directory will start with @code{img-}, making outputs like
-@file{img-n1-1.fits} or @file{img-n3-42.fits}.
-
-@option{--prefix} can also be used to store the links/copies in another
-directory relative to the directory this script is being run (it must
-already exist). For example @code{--prefix=/path/to/processing/img-} will
-put all the links/copies in the @file{/path/to/processing} directory, and
-the files (in that directory) will all start with @file{img-}.
+Prefix to append before the night-identifier of each newly created link or
copy.
+This option is thus only relevant with the @option{--copy} or @option{--link}
options.
+See the description of @option{--link} for how it is used.
+For example, with @option{--prefix=img-}, all the created file names in the
current directory will start with @code{img-}, making outputs like
@file{img-n1-1.fits} or @file{img-n3-42.fits}.
+
+@option{--prefix} can also be used to store the links/copies in another
directory relative to the directory this script is being run (it must already
exist).
+For example @code{--prefix=/path/to/processing/img-} will put all the
links/copies in the @file{/path/to/processing} directory, and the files (in
that directory) will all start with @file{img-}.
@end table
@@ -11079,24 +8238,18 @@ the files (in that directory) will all start with
@file{img-}.
@cindex Image format conversion
@cindex Converting image formats
@pindex @r{ConvertType (}astconvertt@r{)}
-The FITS format used in astronomy was defined mainly for archiving,
-transmission, and processing. In other situations, the data might be useful
-in other formats. For example, when you are writing a paper or report, or
-if you are making slides for a talk, you can't use a FITS image. Other
-image formats should be used. In other cases you might want your pixel
-values in a table format as plain text for input to other programs that
-don't recognize FITS. ConvertType is created for such situations. The
-various types will increase with future updates and based on need.
-
-The conversion is not only one way (from FITS to other formats), but two
-ways (except the EPS and PDF formats@footnote{Because EPS and PDF are
-vector, not raster/pixelated formats}). So you can also convert a JPEG
-image or text file into a FITS image. Basically, other than EPS/PDF, you
-can use any of the recognized formats as different color channel inputs to
-get any of the recognized outputs. So before explaining the options and
-arguments (in @ref{Invoking astconvertt}), we'll start with a short
-description of the recognized files types in @ref{Recognized file formats},
-followed a short introduction to digital color in @ref{Color}.
+The FITS format used in astronomy was defined mainly for archiving,
transmission, and processing.
+In other situations, the data might be useful in other formats.
+For example, when you are writing a paper or report, or if you are making
slides for a talk, you can't use a FITS image.
+Other image formats should be used.
+In other cases you might want your pixel values in a table format as plain
text for input to other programs that don't recognize FITS.
+ConvertType is created for such situations.
+The supported types will increase with future updates, based on need.
+
+The conversion is not only one way (from FITS to other formats), but two ways
(except the EPS and PDF formats@footnote{Because EPS and PDF are vector, not
raster/pixelated formats}).
+So you can also convert a JPEG image or text file into a FITS image.
+Basically, other than EPS/PDF, you can use any of the recognized formats as
different color channel inputs to get any of the recognized outputs.
+So before explaining the options and arguments (in @ref{Invoking
astconvertt}), we'll start with a short description of the recognized file
types in @ref{Recognized file formats}, followed by a short introduction to
digital color in @ref{Color}.
@menu
* Recognized file formats:: Recognized file formats
@@ -11107,59 +8260,42 @@ followed a short introduction to digital color in
@ref{Color}.
@node Recognized file formats, Color, ConvertType, ConvertType
@subsection Recognized file formats
-The various standards and the file name extensions recognized by
-ConvertType are listed below. Currently Gnuastro uses the file name's
-suffix to identify the format.
+The various standards and the file name extensions recognized by ConvertType
are listed below.
+Currently Gnuastro uses the file name's suffix to identify the format.
@table @asis
@item FITS or IMH
@cindex IRAF
@cindex Astronomical data format
-Astronomical data are commonly stored in the FITS format (or the older data
-IRAF @file{.imh} format), a list of file name suffixes which indicate that
-the file is in this format is given in @ref{Arguments}.
+Astronomical data are commonly stored in the FITS format (or the older IRAF
@file{.imh} format); a list of file name suffixes which indicate that the file
is in this format is given in @ref{Arguments}.
-Each image extension of a FITS file only has one value per
-pixel/element. Therefore, when used as input, each input FITS image
-contributes as one color channel. If you want multiple extensions in one
-FITS file for different color channels, you have to repeat the file name
-multiple times and use the @option{--hdu}, @option{--hdu2}, @option{--hdu3}
-or @option{--hdu4} options to specify the different extensions.
+Each image extension of a FITS file only has one value per pixel/element.
+Therefore, when used as input, each input FITS image contributes as one color
channel.
+If you want multiple extensions in one FITS file for different color channels,
you have to repeat the file name multiple times and use the @option{--hdu},
@option{--hdu2}, @option{--hdu3} or @option{--hdu4} options to specify the
different extensions.
@item JPEG
@cindex JPEG format
@cindex Raster graphics
@cindex Pixelated graphics
-The JPEG standard was created by the Joint photographic experts
-group. It is currently one of the most commonly used image
-formats. Its major advantage is the compression algorithm that is
-defined by the standard. Like the FITS standard, this is a raster
-graphics format, which means that it is pixelated.
-
-A JPEG file can have 1 (for gray-scale), 3 (for RGB) and 4 (for CMYK)
-color channels. If you only want to convert one JPEG image into other
-formats, there is no problem, however, if you want to use it in
-combination with other input files, make sure that the final number of
-color channels does not exceed four. If it does, then ConvertType will
-abort and notify you.
+The JPEG standard was created by the Joint Photographic Experts Group.
+It is currently one of the most commonly used image formats.
+Its major advantage is the compression algorithm that is defined by the
standard.
+Like the FITS standard, this is a raster graphics format, which means that it
is pixelated.
+
+A JPEG file can have 1 (for gray-scale), 3 (for RGB) and 4 (for CMYK) color
channels.
+If you only want to convert one JPEG image into other formats, there is no
problem.
+However, if you want to use it in combination with other input files, make
sure that the final number of color channels does not exceed four.
+If it does, then ConvertType will abort and notify you.
@cindex Suffixes, JPEG images
-The file name endings that are recognized as a JPEG file for input
-are: @file{.jpg}, @file{.JPG}, @file{.jpeg}, @file{.JPEG},
-@file{.jpe}, @file{.jif}, @file{.jfif} and @file{.jfi}.
+The file name endings that are recognized as a JPEG file for input are:
+@file{.jpg}, @file{.JPG}, @file{.jpeg}, @file{.JPEG}, @file{.jpe},
@file{.jif}, @file{.jfif} and @file{.jfi}.
@item TIFF
@cindex TIFF format
-TIFF (or Tagged Image File Format) was originally designed as a common
-format for scanners in the early 90s and since then it has grown to become
-very general. In many aspects, the TIFF standard is similar to the FITS
-image standard: it can allow data of many types (see @ref{Numeric data
-types}), and also allows multiple images to be stored in a single file
-(each image in the file is called a `directory' in the TIFF
-standard). However, unlike FITS, it can only store images, it has no
-constructs for tables. Another (inconvenient) difference with the FITS
-standard is that keyword names are stored as numbers, not human-readable
-text.
+TIFF (or Tagged Image File Format) was originally designed as a common format
for scanners in the early 90s, and since then it has grown to become very
general.
+In many aspects, the TIFF standard is similar to the FITS image standard: it
can allow data of many types (see @ref{Numeric data types}), and also allows
multiple images to be stored in a single file (each image in the file is called
a `directory' in the TIFF standard).
+However, unlike FITS, it can only store images, it has no constructs for
tables.
+Another (inconvenient) difference with the FITS standard is that keyword names
are stored as numbers, not human-readable text.
However, outside of astronomy, because of its support of different numeric
data types, many fields use TIFF images for accurate (for example 16-bit
@@ -11173,13 +8309,11 @@ writing TIFF images, please get in touch with us.
@cindex PostScript
@cindex Vector graphics
@cindex Encapsulated PostScript
-The Encapsulated PostScript (EPS) format is essentially a one page
-PostScript file which has a specified size. PostScript also includes
-non-image data, for example lines and texts. It is a fully functional
-programming language to describe a document. Therefore in ConvertType,
-EPS is only an output format and cannot be used as input. Contrary to
-the FITS or JPEG formats, PostScript is not a raster format, but is
-categorized as vector graphics.
+The Encapsulated PostScript (EPS) format is essentially a one page PostScript
file which has a specified size.
+PostScript also includes non-image data, for example lines and text.
+It is a fully functional programming language to describe a document.
+Therefore in ConvertType, EPS is only an output format and cannot be used as
input.
+Contrary to the FITS or JPEG formats, PostScript is not a raster format, but
is categorized as vector graphics.
@cindex PDF
@cindex Adobe systems
@@ -11187,106 +8321,71 @@ categorized as vector graphics.
@cindex Compiled PostScript
@cindex Portable Document format
@cindex Static document description format
-The Portable Document Format (PDF) is currently the most common format
-for documents. Some believe that PDF has replaced PostScript and that
-PostScript is now obsolete. This view is wrong, a PostScript file is
-an actual plain text file that can be edited like any program source
-with any text editor. To be able to display its programmed content or
-print, it needs to pass through a processor or compiler. A PDF file
-can be thought of as the processed output of the compiler on an input
-PostScript file. PostScript, EPS and PDF were created and are
-registered by Adobe Systems.
+The Portable Document Format (PDF) is currently the most common format for
documents.
+Some believe that PDF has replaced PostScript and that PostScript is now
obsolete.
+This view is wrong: a PostScript file is an actual plain text file that can
be edited like any program source with any text editor.
+To be able to display its programmed content or print, it needs to pass
through a processor or compiler.
+A PDF file can be thought of as the processed output of the compiler on an
input PostScript file.
+PostScript, EPS and PDF were created and are registered by Adobe Systems.
@cindex @TeX{}
@cindex @LaTeX{}
-With these features in mind, you can see that when you are compiling a
-document with @TeX{} or @LaTeX{}, using an EPS file is much more low
-level than a JPEG and thus you have much greater control and therefore
-quality. Since it also includes vector graphic lines we also use such
-lines to make a thin border around the image to make its appearance in
-the document much better. No matter the resolution of the display or
-printer, these lines will always be clear and not pixelated. In the
-future, addition of text might be included (for example labels or
-object IDs) on the EPS output. However, this can be done better with
-tools within @TeX{} or @LaTeX{} such as
-PGF/Tikz@footnote{@url{http://sourceforge.net/projects/pgf/}}.
+With these features in mind, you can see that when you are compiling a
document with @TeX{} or @LaTeX{}, using an EPS file is much more low level than
a JPEG and thus you have much greater control and therefore quality.
+Since it also includes vector graphic lines, we also use such lines to make a
thin border around the image, making its appearance in the document much better.
+No matter the resolution of the display or printer, these lines will always be
clear and not pixelated.
+In the future, text (for example labels or object IDs) might be added to the
EPS output.
+However, this can be done better with tools within @TeX{} or @LaTeX{} such as
PGF/Tikz@footnote{@url{http://sourceforge.net/projects/pgf/}}.
@cindex Binary image
@cindex Saving binary image
@cindex Black and white image
-If the final input image (possibly after all operations on the flux
-explained below) is a binary image or only has two colors of black and
-white (in segmentation maps for example), then PostScript has another
-great advantage compared to other formats. It allows for 1 bit pixels
-(pixels with a value of 0 or 1), this can decrease the output file
-size by 8 times. So if a gray-scale image is binary, ConvertType will
-exploit this property in the EPS and PDF (see below) outputs.
+If the final input image (possibly after all operations on the flux explained
below) is a binary image or only has two colors of black and white (in
segmentation maps for example), then PostScript has another great advantage
compared to other formats.
+It allows for 1-bit pixels (pixels with a value of 0 or 1), which can
decrease the output file size by a factor of 8.
+So if a gray-scale image is binary, ConvertType will exploit this property in
the EPS and PDF (see below) outputs.
@cindex Suffixes, EPS format
-The standard formats for an EPS file are @file{.eps}, @file{.EPS},
-@file{.epsf} and @file{.epsi}. The EPS outputs of ConvertType have the
-@file{.eps} suffix.
+The standard formats for an EPS file are @file{.eps}, @file{.EPS},
@file{.epsf} and @file{.epsi}.
+The EPS outputs of ConvertType have the @file{.eps} suffix.
@item PDF
@cindex Suffixes, PDF format
@cindex GPL Ghostscript
-As explained above, a PDF document is a static document description
-format, viewing its result is therefore much faster and more efficient
-than PostScript. To create a PDF output, ConvertType will make a
-PostScript page description and convert that to PDF using GPL
-Ghostscript. The suffixes recognized for a PDF file are: @file{.pdf},
-@file{.PDF}. If GPL Ghostscript cannot be run on the PostScript file,
-it will remain and a warning will be printed.
+As explained above, a PDF document is a static document description format;
viewing its result is therefore much faster and more efficient than PostScript.
+To create a PDF output, ConvertType will make a PostScript page description
and convert that to PDF using GPL Ghostscript.
+The suffixes recognized for a PDF file are: @file{.pdf}, @file{.PDF}.
+If GPL Ghostscript cannot be run on the PostScript file, it will remain and a
warning will be printed.
@item @option{blank}
@cindex @file{blank} color channel
-This is not actually a file type! But can be used to fill one color
-channel with a blank value. If this argument is given for any color
-channel, that channel will not be used in the output.
+This is not actually a file type, but it can be used to fill one color
channel with a blank value.
+If this argument is given for any color channel, that channel will not be used
in the output.
@item Plain text
@cindex Plain text
@cindex Suffixes, plain text
-Plain text files have the advantage that they can be viewed with any text
-editor or on the command-line. Most programs also support input as plain
-text files. As input, each plain text file is considered to contain one
-color channel.
-
-In ConvertType, the recognized extensions for plain text files are
-@file{.txt} and @file{.dat}. As described in @ref{Invoking astconvertt}, if
-you just give these extensions, (and not a full filename) as output, then
-automatic output will be preformed to determine the final output name (see
-@ref{Automatic output}). Besides these, when the format of a file cannot be
-recognized from its name, ConvertType will fall back to plain text mode. So
-you can use any name (even without an extension) for a plain text input or
-output. Just note that when the suffix is not recognized, automatic output
-will not be preformed.
-
-The basic input/output on plain text images is very similar to how tables
-are read/written as described in @ref{Gnuastro text table format}. Simply
-put, the restrictions are very loose, and there is a convention to define a
-name, units, data type (see @ref{Numeric data types}), and comments for the
-data in a commented line. The only difference is that as a table, a text
-file can contain many datasets (columns), but as a 2D image, it can only
-contain one dataset. As a result, only one information comment line is
-necessary for a 2D image, and instead of the starting `@code{# Column N}'
-(@code{N} is the column number), the information line for a 2D image must
-start with `@code{# Image 1}'. When ConvertType is asked to output to plain
-text file, this information comment line is written before the image pixel
-values.
+Plain text files have the advantage that they can be viewed with any text
editor or on the command-line.
+Most programs also support input as plain text files.
+As input, each plain text file is considered to contain one color channel.
-When converting an image to plain text, consider the fact that if the image
-is large, the number of columns in each line will become very large,
-possibly making it very hard to open in some text editors.
+In ConvertType, the recognized extensions for plain text files are @file{.txt}
and @file{.dat}.
+As described in @ref{Invoking astconvertt}, if you just give one of these extensions (and not a full filename) as output, then automatic output will be performed to determine the final output name (see @ref{Automatic output}).
+Besides these, when the format of a file cannot be recognized from its name,
ConvertType will fall back to plain text mode.
+So you can use any name (even without an extension) for a plain text input or
output.
+Just note that when the suffix is not recognized, automatic output will not be performed.
+
+The basic input/output on plain text images is very similar to how tables are
read/written as described in @ref{Gnuastro text table format}.
+Simply put, the restrictions are very loose, and there is a convention to
define a name, units, data type (see @ref{Numeric data types}), and comments
for the data in a commented line.
+The only difference is that as a table, a text file can contain many datasets
(columns), but as a 2D image, it can only contain one dataset.
+As a result, only one information comment line is necessary for a 2D image,
and instead of the starting `@code{# Column N}' (@code{N} is the column
number), the information line for a 2D image must start with `@code{# Image 1}'.
+When ConvertType is asked to output to plain text file, this information
comment line is written before the image pixel values.
+
+When converting an image to plain text, consider the fact that if the image is
large, the number of columns in each line will become very large, possibly
making it very hard to open in some text editors.
@item Standard output (command-line)
-This is very similar to the plain text output, but instead of creating a
-file to keep the printed values, they are printed on the command line. This
-can be very useful when you want to redirect the results directly to
-another program in one command with no intermediate file. The only
-difference is that only the pixel values are printed (with no information
-comment line). To print to the standard output, set the output name to
-`@file{stdout}'.
+This is very similar to the plain text output, but instead of creating a file
to keep the printed values, they are printed on the command line.
+This can be very useful when you want to redirect the results directly to
another program in one command with no intermediate file.
+The only difference is that only the pixel values are printed (with no
information comment line).
+To print to the standard output, set the output name to `@file{stdout}'.
@end table
@@ -11300,101 +8399,69 @@ comment line). To print to the standard output, set
the output name to
@cindex Pixels
@cindex Colorspace
@cindex Primary colors
-Color is defined by mixing various measurements/filters. In digital
-monitors or common digital cameras, colors are displayed/stored by mixing
-the three basic colors of red, green and blue (RGB) with various
-proportions. When printing on paper, standard printers use the cyan,
-magenta, yellow and key (CMYK, key=black) color space. In other words, for
-each displayed/printed pixel of a color image, the dataset/image has three
-or four values.
+Color is defined by mixing various measurements/filters.
+In digital monitors or common digital cameras, colors are displayed/stored by
mixing the three basic colors of red, green and blue (RGB) with various
proportions.
+When printing on paper, standard printers use the cyan, magenta, yellow and
key (CMYK, key=black) color space.
+In other words, for each displayed/printed pixel of a color image, the
dataset/image has three or four values.
@cindex Color channel
@cindex Channel, color
-To store/show the three values for each pixel, cameras and monitors
-allocate a certain fraction of each pixel's area to red, green and blue
-filters. These three filters are thus built into the hardware at the pixel
-level. However, because measurement accuracy is very important in
-scientific instruments, and we want to do measurements (take images) with
-various/custom filters (without having to order a new expensive detector!),
-scientific detectors use the full area of the pixel to store one value for
-it in a single/mono channel dataset. To make measurements in different
-filters, we just place a filter in the light path before the
-detector. Therefore, the FITS format that is used to store astronomical
-datasets is inherently a mono-channel format (see @ref{Recognized file
-formats} or @ref{Fits}).
-
-When a subject has been imaged in multiple filters, you can feed each
-different filter into the red, green and blue channels and obtain a colored
-visualization. In ConvertType, you can do this by giving each separate
-single-channel dataset (for example in the FITS image format) as an
-argument (in the proper order), then asking for the output in a format that
-supports multi-channel datasets (for example JPEG or PDF, see the examples
-in @ref{Invoking astconvertt}).
+To store/show the three values for each pixel, cameras and monitors allocate a
certain fraction of each pixel's area to red, green and blue filters.
+These three filters are thus built into the hardware at the pixel level.
+However, because measurement accuracy is very important in scientific
instruments, and we want to do measurements (take images) with various/custom
filters (without having to order a new expensive detector!), scientific
detectors use the full area of the pixel to store one value for it in a
single/mono channel dataset.
+To make measurements in different filters, we just place a filter in the light
path before the detector.
+Therefore, the FITS format that is used to store astronomical datasets is
inherently a mono-channel format (see @ref{Recognized file formats} or
@ref{Fits}).
+
+When a subject has been imaged in multiple filters, you can feed each
different filter into the red, green and blue channels and obtain a colored
visualization.
+In ConvertType, you can do this by giving each separate single-channel dataset
(for example in the FITS image format) as an argument (in the proper order),
then asking for the output in a format that supports multi-channel datasets
(for example JPEG or PDF, see the examples in @ref{Invoking astconvertt}).
@cindex Grayscale
@cindex Visualization
@cindex Colorspace, HSV
@cindex Colorspace, gray-scale
@cindex HSV: Hue Saturation Value
-As discussed above, color is not defined when a dataset/image contains a
-single value for each pixel. However, we interact with scientific datasets
-through monitors or printers (which allow multiple values per pixel and
-produce color with them). As a result, there is a lot of freedom in
-visualizing a single-channel dataset. The most basic is to use shades of
-black (because of its strong contrast with white). This scheme is called
-grayscale. To help in visualization, more complex mappings can be
-defined. For example, the values can be scaled to a range of 0 to 360 and
-used as the ``Hue'' term of the
-@url{https://en.wikipedia.org/wiki/HSL_and_HSV, Hue-Saturation-Value} (HSV)
-color space (while fixing the ``Saturation'' and ``Value'' terms). In
-ConvertType, you can use the @option{--colormap} option to choose between
-different mappings of mono-channel inputs, see @ref{Invoking astconvertt}.
-
-Since grayscale is a commonly used mapping of single-valued datasets, we'll
-continue with a closer look at how it is stored. One way to represent a
-gray-scale image in different color spaces is to use the same proportions
-of the primary colors in each pixel. This is the common way most FITS image
-viewers work: for each pixel, they fill all the channels with the single
-value. While this is necessary for displaying a dataset, there are
-downsides when storing/saving this type of grayscale visualization (for
-example in a paper).
+As discussed above, color is not defined when a dataset/image contains a
single value for each pixel.
+However, we interact with scientific datasets through monitors or printers
(which allow multiple values per pixel and produce color with them).
+As a result, there is a lot of freedom in visualizing a single-channel dataset.
+The most basic is to use shades of black (because of its strong contrast with
white).
+This scheme is called grayscale.
+To help in visualization, more complex mappings can be defined.
+For example, the values can be scaled to a range of 0 to 360 and used as the
``Hue'' term of the @url{https://en.wikipedia.org/wiki/HSL_and_HSV,
Hue-Saturation-Value} (HSV) color space (while fixing the ``Saturation'' and
``Value'' terms).
+In ConvertType, you can use the @option{--colormap} option to choose between
different mappings of mono-channel inputs, see @ref{Invoking astconvertt}.
+
+Since grayscale is a commonly used mapping of single-valued datasets, we'll
continue with a closer look at how it is stored.
+One way to represent a gray-scale image in different color spaces is to use
the same proportions of the primary colors in each pixel.
+This is the common way most FITS image viewers work: for each pixel, they fill
all the channels with the single value.
+While this is necessary for displaying a dataset, there are downsides when
storing/saving this type of grayscale visualization (for example in a paper).
@itemize
@item
-Three (for RGB) or four (for CMYK) values have to be stored for every
-pixel, this makes the output file very heavy (in terms of bytes).
+Three (for RGB) or four (for CMYK) values have to be stored for every pixel; this makes the output file very heavy (in terms of bytes).
@item
-If printing, the printing errors of each color channel can make the
-printed image slightly more blurred than it actually is.
+If printing, the printing errors of each color channel can make the printed
image slightly more blurred than it actually is.
@end itemize
@cindex PNG standard
@cindex Single channel CMYK
-To solve both these problems when storing grayscale visualization, the best
-way is to save a single-channel dataset into the black channel of the CMYK
-color space. The JPEG standard is the only common standard that accepts
-CMYK color space.
-
-The JPEG and EPS standards set two sizes for the number of bits in each
-channel: 8-bit and 12-bit. The former is by far the most common and is what
-is used in ConvertType. Therefore, each channel should have values between
-0 to @math{2^8-1=255}. From this we see how each pixel in a gray-scale
-image is one byte (8 bits) long, in an RGB image, it is 3 bytes long and in
-CMYK it is 4 bytes long. But thanks to the JPEG compression algorithms,
-when all the pixels of one channel have the same value, that channel is
-compressed to one pixel. Therefore a Grayscale image and a CMYK image that
-has only the K-channel filled are approximately the same file size.
+To solve both these problems when storing grayscale visualization, the best
way is to save a single-channel dataset into the black channel of the CMYK
color space.
+The JPEG standard is the only common standard that accepts CMYK color space.
+
+The JPEG and EPS standards set two sizes for the number of bits in each
channel: 8-bit and 12-bit.
+The former is by far the most common and is what is used in ConvertType.
+Therefore, each channel should have values between 0 to @math{2^8-1=255}.
+From this we see that each pixel in a gray-scale image is one byte (8 bits) long; in an RGB image it is 3 bytes long, and in CMYK it is 4 bytes long.
+But thanks to the JPEG compression algorithms, when all the pixels of one
channel have the same value, that channel is compressed to one pixel.
+Therefore a grayscale image and a CMYK image that has only the K-channel filled are approximately the same file size.
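The 8-bit arithmetic quoted above can be checked directly; this is just a numeric sketch of the stated limits, not ConvertType code:

```shell
# Maximum value of an 8-bit channel: 2^8 - 1.
awk 'BEGIN { print 2^8 - 1 }'
```

With one byte per channel, this is also why a gray-scale pixel takes 1 byte, an RGB pixel 3 bytes and a CMYK pixel 4 bytes before compression.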
@node Invoking astconvertt, , Color, ConvertType
@subsection Invoking ConvertType
-ConvertType will convert any recognized input file type to any specified
-output type. The executable name is @file{astconvertt} with the following
-general template
+ConvertType will convert any recognized input file type to any specified
output type.
+The executable name is @file{astconvertt} with the following general template
@example
$ astconvertt [OPTION...] InputFile [InputFile2] ... [InputFile4]
@@ -11426,65 +8493,43 @@ $ cat 2darray.txt | astconvertt -oimg.fits
@end example
@noindent
-The output's file format will be interpreted from the value given to the
-@option{--output} option. It can either be given on the command-line or in
-any of the configuration files (see @ref{Configuration files}). Note that
-if the output suffix is not recognized, it will default to plain text
-format, see @ref{Recognized file formats}.
+The output's file format will be interpreted from the value given to the
@option{--output} option.
+It can either be given on the command-line or in any of the configuration
files (see @ref{Configuration files}).
+Note that if the output suffix is not recognized, it will default to plain
text format, see @ref{Recognized file formats}.
@cindex Standard input
-At most four input files (one for each color channel for formats that allow
-it) are allowed in ConvertType. The first input dataset can either be a
-file or come from Standard input (see @ref{Standard input}). The order of
-multiple input files is important. After reading the input file(s) the
-number of color channels in all the inputs will be used to define which
-color space to use for the outputs and how each color channel is
-interpreted.
-
-Some formats can allow more than one color channel (for example in the JPEG
-format, see @ref{Recognized file formats}). If there is one input dataset
-(color channel) the output will be gray-scale, if three input datasets
-(color channels) are given, they are respectively considered to be the red,
-green and blue color channels. Finally, if there are four color channels
-they will be be cyan, magenta, yellow and black (CMYK colors).
-
-The value to @option{--output} (or @option{-o}) can be either a full file
-name or just the suffix of the desired output format. In the former case,
-it will used for the output. In the latter case, the name of the output
-file will be set based on the automatic output guidelines, see
-@ref{Automatic output}. Note that the suffix name can optionally start a
-@file{.} (dot), so for example @option{--output=.jpg} and
-@option{--output=jpg} are equivalent. See @ref{Recognized file formats}
-
-Besides the common set of options explained in @ref{Common options},
-the options to ConvertType can be classified into input, output and
-flux related options. The majority of the options are to do with the
-flux range. Astronomical data usually have a very large dynamic range
-(difference between maximum and minimum value) and different subjects
-might be better demonstrated with a limited flux range.
+At most four input files (one for each color channel for formats that allow
it) are allowed in ConvertType.
+The first input dataset can either be a file or come from Standard input (see
@ref{Standard input}).
+The order of multiple input files is important.
+After reading the input file(s) the number of color channels in all the inputs
will be used to define which color space to use for the outputs and how each
color channel is interpreted.
+
+Some formats can allow more than one color channel (for example in the JPEG
format, see @ref{Recognized file formats}).
+If there is one input dataset (color channel), the output will be gray-scale; if three input datasets (color channels) are given, they are respectively considered to be the red, green and blue color channels.
+Finally, if there are four color channels, they will be cyan, magenta, yellow and black (CMYK colors).
+
+The value to @option{--output} (or @option{-o}) can be either a full file name
or just the suffix of the desired output format.
+In the former case, it will be used for the output.
+In the latter case, the name of the output file will be set based on the
automatic output guidelines, see @ref{Automatic output}.
+Note that the suffix can optionally start with a @file{.} (dot), so for example @option{--output=.jpg} and @option{--output=jpg} are equivalent.
+See @ref{Recognized file formats}.
+
+Besides the common set of options explained in @ref{Common options}, the options to ConvertType can be classified into input, output and flux-related options.
+The majority of the options deal with the flux range.
+Astronomical data usually have a very large dynamic range (difference between
maximum and minimum value) and different subjects might be better demonstrated
with a limited flux range.
@noindent
Input:
@table @option
@item -h STR/INT
@itemx --hdu=STR/INT
-In ConvertType, it is possible to call the HDU option multiple times for
-the different input FITS or TIFF files in the same order that they are
-called on the command-line. Note that in the TIFF standard, one `directory'
-(similar to a FITS HDU) may contain multiple color channels (for example
-when the image is in RGB).
-
-Except for the fact that multiple calls are possible, this option is
-identical to the common @option{--hdu} in @ref{Input output options}. The
-number of calls to this option cannot be less than the number of input FITS
-or TIFF files, but if there are more, the extra HDUs will be ignored, note
-that they will be read in the order described in @ref{Configuration file
-precedence}.
-
-Unlike CFITSIO, libtiff (which is used to read TIFF files) only recognizes
-numbers (counting from zero, similar to CFITSIO) for `directory'
-identification. Hence the concept of names is not defined for the
-directories and the values to this option for TIFF files must be numbers.
+In ConvertType, it is possible to call the HDU option multiple times for the
different input FITS or TIFF files in the same order that they are called on
the command-line.
+Note that in the TIFF standard, one `directory' (similar to a FITS HDU) may
contain multiple color channels (for example when the image is in RGB).
+
+Except for the fact that multiple calls are possible, this option is identical
to the common @option{--hdu} in @ref{Input output options}.
+The number of calls to this option cannot be less than the number of input FITS or TIFF files; if there are more, the extra HDUs will be ignored (they will be read in the order described in @ref{Configuration file precedence}).
+
+Unlike CFITSIO, libtiff (which is used to read TIFF files) only recognizes
numbers (counting from zero, similar to CFITSIO) for `directory' identification.
+Hence the concept of names is not defined for the directories and the values
to this option for TIFF files must be numbers.
@end table
@noindent
@@ -11493,117 +8538,88 @@ Output:
@item -w FLT
@itemx --widthincm=FLT
-The width of the output in centimeters. This is only relevant for those
-formats that accept such a width (not plain text for example). For most
-digital purposes, the number of pixels is far more important than the value
-to this parameter because you can adjust the absolute width (in inches or
-centimeters) in your document preparation program.
+The width of the output in centimeters.
+This is only relevant for those formats that accept such a width (not plain
text for example).
+For most digital purposes, the number of pixels is far more important than the
value to this parameter because you can adjust the absolute width (in inches or
centimeters) in your document preparation program.
@item -b INT
@itemx --borderwidth=INT
@cindex Border on an image
-The width of the border to be put around the EPS and PDF outputs in units
-of PostScript points. There are 72 or 28.35 PostScript points in an inch or
-centimeter respectively. In other words, there are roughly 3 PostScript
-points in every millimeter. If you are planning on adding a border, its
-significance is highly correlated with the value you give to the
-@option{--widthincm} parameter.
-
-Unfortunately in the document structuring convention of the PostScript
-language, the ``bounding box'' has to be in units of PostScript points
-with no fractions allowed. So the border values only have to be
-specified in integers. To have a final border that is thinner than one
-PostScript point in your document, you can ask for a larger width in
-ConvertType and then scale down the output EPS or PDF file in your
-document preparation program. For example by setting @command{width}
-in your @command{includegraphics} command in @TeX{} or @LaTeX{}. Since
-it is vector graphics, the changes of size have no effect on the
-quality of your output quality (pixels don't get different values).
+The width of the border to be put around the EPS and PDF outputs in units of
PostScript points.
+There are 72 or 28.35 PostScript points in an inch or centimeter respectively.
+In other words, there are roughly 3 PostScript points in every millimeter.
+If you are planning on adding a border, its significance is highly correlated
with the value you give to the @option{--widthincm} parameter.
+
+Unfortunately in the document structuring convention of the PostScript
language, the ``bounding box'' has to be in units of PostScript points with no
fractions allowed.
+So the border values can only be specified as integers.
+To have a final border that is thinner than one PostScript point in your
document, you can ask for a larger width in ConvertType and then scale down the
output EPS or PDF file in your document preparation program.
+For example by setting @command{width} in your @command{includegraphics}
command in @TeX{} or @LaTeX{}.
+Since it is vector graphics, changes of size have no effect on the quality of your output (pixels don't get different values).
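The unit relations quoted above can be sketched numerically (72 points per inch, one inch being 2.54 cm); this is plain arithmetic, not part of ConvertType:

```shell
# PostScript points per centimeter and per millimeter:
# 72/2.54 gives ~28.35 points/cm, 72/25.4 gives ~2.8 points/mm,
# matching the "roughly 3 points per millimeter" statement above.
awk 'BEGIN { printf "%.2f %.3f\n", 72/2.54, 72/25.4 }'
```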
@item -x
@itemx --hex
@cindex ASCII85 encoding
@cindex Hexadecimal encoding
-Use Hexadecimal encoding in creating EPS output. By default the ASCII85
-encoding is used which provides a much better compression ratio. When
-converted to PDF (or included in @TeX{} or @LaTeX{} which is finally saved
-as a PDF file), an efficient binary encoding is used which is far more
-efficient than both of them. The choice of EPS encoding will thus have no
-effect on the final PDF.
-
-So if you want to transfer your EPS files (for example if you want to
-submit your paper to arXiv or journals in PostScript), their storage
-might become important if you have large images or lots of small
-ones. By default ASCII85 encoding is used which offers a much better
-compression ratio (nearly 40 percent) compared to Hexadecimal encoding.
+Use Hexadecimal encoding in creating EPS output.
+By default the ASCII85 encoding is used which provides a much better
compression ratio.
+When converted to PDF (or included in @TeX{} or @LaTeX{}, which is finally saved as a PDF file), a compact binary encoding is used which is far more efficient than both of them.
+The choice of EPS encoding will thus have no effect on the final PDF.
+
+So if you want to transfer your EPS files (for example if you want to submit
your paper to arXiv or journals in PostScript), their storage might become
important if you have large images or lots of small ones.
+In such cases, the default ASCII85 encoding's compression ratio (nearly 40 percent better than Hexadecimal encoding) can be significant.
@item -u INT
@itemx --quality=INT
@cindex JPEG compression quality
@cindex Compression quality in JPEG
@cindex Quality of compression in JPEG
-The quality (compression) of the output JPEG file with values from 0 to 100
-(inclusive). For other formats the value to this option is ignored. Note
-that only in gray-scale (when one input color channel is given) will this
-actually be the exact quality (each pixel will correspond to one input
-value). If it is in color mode, some degradation will occur. While the JPEG
-standard does support loss-less graphics, it is not commonly supported.
+The quality (compression) of the output JPEG file with values from 0 to 100
(inclusive).
+For other formats the value to this option is ignored.
+Note that only in gray-scale (when one input color channel is given) will this
actually be the exact quality (each pixel will correspond to one input value).
+If it is in color mode, some degradation will occur.
+While the JPEG standard does define a lossless mode, it is not commonly implemented.
@item --colormap=STR[,FLT,...]
-The color map to visualize a single channel. The first value given to this
-option is the name of the color map, which is shown below. Some color maps
-can be configured. In this case, the configuration parameters are
-optionally given as numbers following the name of the color map for example
-see @option{hsv}. The table below contains the usable names of the color
-maps that are currently supported:
+The color map to visualize a single channel.
+The first value given to this option is the name of the color map, which is
shown below.
+Some color maps can be configured.
+In this case, the configuration parameters are optionally given as numbers following the name of the color map; for example, see @option{hsv}.
+The table below contains the usable names of the color maps that are currently
supported:
@table @option
@item gray
@itemx grey
@cindex Colorspace, gray-scale
-Grayscale color map. This color map doesn't have any parameters. The full
-dataset range will be scaled to 0 and @mymath{2^8-1=255} to be stored in
-the requested format.
+Grayscale color map.
+This color map doesn't have any parameters.
+The full dataset range will be scaled to the range of 0 to @mymath{2^8-1=255} to be stored in the requested format.
@item hsv
@cindex Colorspace, HSV
@cindex Hue, saturation, value
@cindex HSV: Hue Saturation Value
-Hue, Saturation,
-Value@footnote{@url{https://en.wikipedia.org/wiki/HSL_and_HSV}} color
-map. If no values are given after the name (@option{--colormap=hsv}), the
-dataset will be scaled to 0 and 360 for hue covering the full spectrum of
-colors. However, you can limit the range of hue (to show only a special
-color range) by explicitly requesting them after the name (for example
-@option{--colormap=hsv,20,240}).
-
-The mapping of a single-channel dataset to HSV is done through the Hue and
-Value elements: Lower dataset elements have lower ``value'' @emph{and}
-lower ``hue''. This creates darker colors for fainter parts, while also
-respecting the range of colors.
+Hue, Saturation,
Value@footnote{@url{https://en.wikipedia.org/wiki/HSL_and_HSV}} color map.
+If no values are given after the name (@option{--colormap=hsv}), the dataset
will be scaled to 0 and 360 for hue covering the full spectrum of colors.
+However, you can limit the range of hue (to show only a special color range) by explicitly giving the desired limits after the name (for example @option{--colormap=hsv,20,240}).
+
+The mapping of a single-channel dataset to HSV is done through the Hue and
Value elements: Lower dataset elements have lower ``value'' @emph{and} lower
``hue''.
+This creates darker colors for fainter parts, while also respecting the range
of colors.
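Conceptually, the hue mapping described above is a linear rescaling of each pixel value into the requested hue range; the following is only an illustrative sketch (the variable names and numbers are made up, this is not ConvertType's code):

```shell
# Scale a pixel value v from the dataset range [min,max]
# into the hue range [h1,h2] (here 20 to 240, as in the
# --colormap=hsv,20,240 example above).
awk -v v=100 -v min=0 -v max=200 -v h1=20 -v h2=240 \
    'BEGIN { print h1 + (v - min) / (max - min) * (h2 - h1) }'
```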
@item sls
@cindex DS9
@cindex SAO DS9
@cindex SLS Color
@cindex Colorspace: SLS
-The SLS color range, taken from the commonly used
-@url{http://ds9.si.edu,SAO DS9}. The advantage of this color range is that
-it ranges from black to dark blue, and finishes with red and white. So
-unlike the HSV color range, it includes black and white and brighter colors
-(like yellow, red and white) show the larger values.
+The SLS color range, taken from the commonly used @url{http://ds9.si.edu,SAO
DS9}.
+The advantage of this color map is that it starts from black, passes through dark blue, and finishes with red and white.
+So unlike the HSV color range, it includes black and white, and brighter colors (like yellow, red and white) show the larger values.
@end table
@item --rgbtohsv
-When there are three input channels and the output is in the FITS format,
-interpret the three input channels as red, green and blue channels (RGB)
-and convert them to the hue, saturation, value (HSV) color space.
+When there are three input channels and the output is in the FITS format,
interpret the three input channels as red, green and blue channels (RGB) and
convert them to the hue, saturation, value (HSV) color space.
-The currently supported output formats of ConvertType don't have native
-support for HSV. Therefore this option is only supported when the output is
-in FITS format and each of the hue, saturation and value arrays can be
-saved as one FITS extension in the output for further analysis (for example
-to select a certain color).
+The currently supported output formats of ConvertType don't have native
support for HSV.
+Therefore this option is only supported when the output is in FITS format and
each of the hue, saturation and value arrays can be saved as one FITS extension
in the output for further analysis (for example to select a certain color).
@end table
@@ -11615,22 +8631,15 @@ Flux range:
@item -c STR
@itemx --change=STR
@cindex Change converted pixel values
-(@option{=STR}) Change pixel values with the following format
-@option{"from1:to1, from2:to2,..."}. This option is very useful in
-displaying labeled pixels (not actual data images which have noise)
-like segmentation maps. In labeled images, usually a group of pixels
-have a fixed integer value. With this option, you can manipulate the
-labels before the image is displayed to get a better output for print
-or to emphasize on a particular set of labels and ignore the rest. The
-labels in the images will be changed in the same order given. By
-default first the pixel values will be converted then the pixel values
-will be truncated (see @option{--fluxlow} and @option{--fluxhigh}).
-
-You can use any number for the values irrespective of your final
-output, your given values are stored and used in the double precision
-floating point format. So for example if your input image has labels
-from 1 to 20000 and you only want to display those with labels 957 and
-11342 then you can run ConvertType with these options:
+(@option{=STR}) Change pixel values with the following format
@option{"from1:to1, from2:to2,..."}.
+This option is very useful in displaying labeled pixels (not actual data
images which have noise) like segmentation maps.
+In labeled images, usually a group of pixels have a fixed integer value.
+With this option, you can manipulate the labels before the image is displayed to get a better output for print, or to emphasize a particular set of labels and ignore the rest.
+The labels in the images will be changed in the same order given.
+By default, the pixel values will first be converted, then truncated (see @option{--fluxlow} and @option{--fluxhigh}).
+
+You can use any number for the values irrespective of your final output; your given values are stored and used in the double precision floating point format.
+So for example if your input image has labels from 1 to 20000 and you only
want to display those with labels 957 and 11342 then you can run ConvertType
with these options:
@example
$ astconvertt --change=957:50000,11342:50001 --fluxlow=5e4 \
@@ -11638,24 +8647,19 @@ $ astconvertt --change=957:50000,11342:50001
--fluxlow=5e4 \
@end example
@noindent
-While the output JPEG format is only 8 bit, this operation is done in
-an intermediate step which is stored in double precision floating
-point. The pixel values are converted to 8-bit after all operations on
-the input fluxes have been complete. By placing the value in double
-quotes you can use as many spaces as you like for better readability.
+While the output JPEG format is only 8 bit, this operation is done in an
intermediate step which is stored in double precision floating point.
+The pixel values are converted to 8-bit after all operations on the input fluxes have been completed.
+By placing the value in double quotes you can use as many spaces as you like
for better readability.
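The relabeling step itself can be sketched outside ConvertType (the labels mirror the example above; this awk one-liner is only an illustration, not Gnuastro code):

```shell
# Sketch of --change's 'from:to' relabeling: replace label 957
# with 50000 and 11342 with 50001, leaving other labels untouched.
printf '957\n11342\n42\n' | \
    awk '{ if ($1 == 957) $1 = 50000;
           else if ($1 == 11342) $1 = 50001;
           print $1 }'
```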
@item -C
@itemx --changeaftertrunc
-Change pixel values (with @option{--change}) after truncation of the
-flux values, by default it is the opposite.
+Change pixel values (with @option{--change}) after truncation of the flux values; by default it is the opposite.
@item -L FLT
@itemx --fluxlow=FLT
-The minimum flux (pixel value) to display in the output image, any pixel
-value below this value will be set to this value in the output. If the
-value to this option is the same as @option{--fluxhigh}, then no flux
-truncation will be applied. Note that when multiple channels are given,
-this value is used for all the color channels.
+The minimum flux (pixel value) to display in the output image; any pixel value below this value will be set to this value in the output.
+If the value given to this option is the same as @option{--fluxhigh}, then no flux truncation will be applied.
+Note that when multiple channels are given, this value is used for all the color channels.
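The clipping rule these two options apply can be sketched with standard @command{awk} on plain numbers; this is only an illustration of the truncation logic (the bounds 100 and 5000 are hypothetical stand-ins for @option{--fluxlow} and @option{--fluxhigh}), not a ConvertType invocation:

```shell
# Clip three sample pixel values to the range [100, 5000]: values below
# the lower bound are raised to it, values above the upper bound are
# lowered to it, and in-range values pass through unchanged.
printf '50\n150\n7000\n' | awk -v low=100 -v high=5000 \
  '{v = $1; if (v < low) v = low; if (v > high) v = high; print v}'
```

This prints 100, 150, and 5000: only the out-of-range inputs were replaced by the bounds.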
@item -H FLT
@itemx --fluxhigh=FLT
@@ -11664,33 +8668,24 @@ The maximum flux (pixel value) to display in the output image, see
@item -m INT
@itemx --maxbyte=INT
-This is only used for the JPEG and EPS output formats which have an 8-bit
-space for each channel of each pixel. The maximum value in each pixel can
-therefore be @mymath{2^8-1=255}. With this option you can change (decrease)
-the maximum value. By doing so you will decrease the dynamic range. It can
-be useful if you plan to use those values for other purposes.
+This is only used for the JPEG and EPS output formats which have an 8-bit space for each channel of each pixel.
+The maximum value in each pixel can therefore be @mymath{2^8-1=255}.
+With this option you can change (decrease) the maximum value.
+By doing so you will decrease the dynamic range.
+It can be useful if you plan to use those values for other purposes.
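The effect of lowering the ceiling can be sketched with @command{awk}: scaling an input range onto @mymath{[0, \rm{maxbyte}]} instead of @mymath{[0, 255]}. The input range of 0 to 1000 and the value 200 for @option{--maxbyte} below are hypothetical; this only illustrates the scaling, it is not a ConvertType invocation:

```shell
# Linearly map values in [0, 1000] onto [0, 200] instead of the full
# 8-bit range [0, 255], shrinking the dynamic range of the output.
printf '0\n500\n1000\n' | awk -v max=200 -v high=1000 \
  '{printf "%d\n", ($1 / high) * max}'
```

This prints 0, 100, and 200: the brightest pixel now maps to 200, not 255.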
@item -A INT
@itemx --forcemin=INT
-Enforce the value of @option{--fluxlow} (when its given), even if its
-smaller than the minimum of the dataset and the output is format supporting
-color. This is particularly useful when you are converting a number of
-images to a common image format like JPEG or PDF with a single command and
-want them all to have the same range of colors, independent of the contents
-of the dataset. Note that if the minimum value is smaller than
-@option{--fluxlow}, then this option is redundant.
+Enforce the value of @option{--fluxlow} (when it is given), even if it is smaller than the minimum of the dataset and the output is a format supporting color.
+This is particularly useful when you are converting a number of images to a common image format like JPEG or PDF with a single command and want them all to have the same range of colors, independent of the contents of the dataset.
+Note that if the minimum value is smaller than @option{--fluxlow}, then this option is redundant.
@cindex PDF
@cindex EPS
@cindex PostScript
-By default, when the dataset only has two values, @emph{and} the output
-format is PDF or EPS, ConvertType will use the PostScript optimization that
-allows setting the pixel values per bit, not byte (@ref{Recognized file
-formats}). This can greatly help reduce the file size. However, when
-@option{--fluxlow} or @option{--fluxhigh} are called, this optimization is
-disabeled: even though there are only two values (is binary), the
-difference between them does not correspond to the full contrast of black
-and white.
+By default, when the dataset only has two values, @emph{and} the output format is PDF or EPS, ConvertType will use the PostScript optimization that allows setting the pixel values per bit, not byte (@ref{Recognized file formats}).
+This can greatly help reduce the file size.
+However, when @option{--fluxlow} or @option{--fluxhigh} are called, this optimization is disabled: even though there are only two values (the image is binary), the difference between them does not correspond to the full contrast of black and white.
@item -B INT
@itemx --forcemax=INT
@@ -11699,67 +8694,43 @@ Similar to @option{--forcemin}, but for the maximum.
@item -i
@itemx --invert
-For 8-bit output types (JPEG, EPS, and PDF for example) the final value
-that is stored is inverted so white becomes black and vice versa. The
-reason for this is that astronomical images usually have a very large area
-of blank sky in them. The result will be that a large are of the image will
-be black. Note that this behavior is ideal for gray-scale images, if you
-want a color image, the colors are going to be mixed up.
+For 8-bit output types (JPEG, EPS, and PDF for example) the final value that is stored is inverted, so white becomes black and vice versa.
+The reason for this is that astronomical images usually have a very large area of blank sky in them.
+The result will be that a large area of the image will be black.
+Note that this behavior is ideal for gray-scale images; if you want a color image, the colors are going to be mixed up.
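For an 8-bit channel, the inversion is simply @mymath{255-v} for each stored value @mymath{v}; the following @command{awk} one-liner illustrates that rule on three sample values (not a ConvertType invocation):

```shell
# Invert 8-bit channel values: 0 (black) becomes 255 (white) and vice
# versa, so a mostly-dark sky prints as mostly-white paper.
printf '0\n100\n255\n' | awk '{print 255 - $1}'
```

This prints 255, 155, and 0.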
@end table
@node Table, , ConvertType, Data containers
@section Table
-Tables are the products of processing astronomical images and spectra. For
-example in Gnuastro, MakeCatalog will process the defined pixels over an
-object and produce a catalog (see @ref{MakeCatalog}). For each identified
-object, MakeCatalog can print its position on the image or sky, its total
-brightness and many other information that is deducible from the given
-image. Each one of these properties is a column in its output catalog (or
-table) and for each input object, we have a row.
-
-When there are only a small number of objects (rows) and not too many
-properties (columns), then a simple plain text file is mainly enough to
-store, transfer, or even use the produced data. However, to be more
-efficient in all these aspects, astronomers have defined the FITS binary
-table standard to store data in a binary (0 and 1) format, not plain
-text. This can offer major advantages in all those aspects: the file size
-will be greatly reduced and the reading and writing will be faster (because
-the RAM and CPU also work in binary).
-
-The FITS standard also defines a standard for ASCII tables, where the data
-are stored in the human readable ASCII format, but within the FITS file
-structure. These are mainly useful for keeping ASCII data along with images
-and possibly binary data as multiple (conceptually related) extensions
-within a FITS file. The acceptable table formats are fully described in
-@ref{Tables}.
+Tables are the products of processing astronomical images and spectra.
+For example in Gnuastro, MakeCatalog will process the defined pixels over an object and produce a catalog (see @ref{MakeCatalog}).
+For each identified object, MakeCatalog can print its position on the image or sky, its total brightness, and much other information that is deducible from the given image.
+Each one of these properties is a column in its output catalog (or table) and for each input object, we have a row.
+
+When there are only a small number of objects (rows) and not too many properties (columns), then a simple plain text file is usually enough to store, transfer, or even use the produced data.
+However, to be more efficient in all these aspects, astronomers have defined the FITS binary table standard to store data in a binary (0 and 1) format, not plain text.
+This can offer major advantages in all those aspects: the file size will be greatly reduced and the reading and writing will be faster (because the RAM and CPU also work in binary).
+
+The FITS standard also defines a standard for ASCII tables, where the data are stored in the human readable ASCII format, but within the FITS file structure.
+These are mainly useful for keeping ASCII data along with images and possibly binary data as multiple (conceptually related) extensions within a FITS file.
+The acceptable table formats are fully described in @ref{Tables}.
@cindex AWK
@cindex GNU AWK
-Binary tables are not easily readable by human eyes. There is no
-fixed/unified standard on how the zero and ones should be interpreted. The
-Unix-like operating systems have flourished because of a simple fact:
-communication between the various tools is based on human readable
-characters@footnote{In ``The art of Unix programming'', Eric Raymond makes
-this suggestion to programmers: ``When you feel the urge to design a
-complex binary file format, or a complex binary application protocol, it is
-generally wise to lie down until the feeling passes.''. This is a great
-book and strongly recommended, give it a look if you want to truly enjoy
-your work/life in this environment.}. So while the FITS table standards are
-very beneficial for the tools that recognize them, they are hard to use in
-the vast majority of available software. This creates limitations for their
-generic use.
-
-`Table' is Gnuastro's solution to this problem. With Table, FITS tables
-(ASCII or binary) are directly accessible to the Unix-like operating
-systems power-users (those working the command-line or shell, see
-@ref{Command-line interface}). With Table, a FITS table (in binary or ASCII
-formats) is only one command away from AWK (or any other tool you want to
-use). Just like a plain text file that you read with the @command{cat}
-command. You can pipe the output of Table into any other tool for
-higher-level processing, see the examples in @ref{Invoking asttable} for
-some simple examples.
+Binary tables are not easily readable by human eyes.
+There is no fixed/unified standard on how the zeros and ones should be interpreted.
+The Unix-like operating systems have flourished because of a simple fact: communication between the various tools is based on human readable characters@footnote{In ``The art of Unix programming'', Eric Raymond makes this suggestion to programmers: ``When you feel the urge to design a complex binary file format, or a complex binary application protocol, it is generally wise to lie down until the feeling passes.''.
+This is a great book and strongly recommended; give it a look if you want to truly enjoy your work/life in this environment.}.
+So while the FITS table standards are very beneficial for the tools that recognize them, they are hard to use in the vast majority of available software.
+This creates limitations for their generic use.
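The pipeline shape that solves this (a table streamed, one row per line, into AWK or any other text tool) can be demonstrated with a plain-text stand-in catalog; with Gnuastro installed, @command{asttable cat.fits} would take the place of the @command{printf} below (the two-column ID/magnitude data here is purely hypothetical):

```shell
# Stream a tiny stand-in catalog (ID, magnitude) through awk and keep
# only the IDs of rows brighter (smaller magnitude) than 20.
printf '1 19.5\n2 21.3\n3 18.2\n' | awk '$2 < 20 {print $1}'
```

This prints 1 and 3: exactly the kind of higher-level filtering that becomes one command away once a FITS table can be dumped as plain text.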
+
+`Table'