vgosDB creation
The IVS standard format for geodetic VLBI data is vgosDB. This information should help you convert a directory full of fringe data into the vgosDB file format.
Prepare the fringe data
report_prep.sh
First we need to use the report_prep.sh script on oaf to generate some necessary files. This script relies on a particular directory structure and the presence of certain files to run correctly (a sketch of the expected layout follows the list below).
- The control file should be located in the control directory created previously and named cf_<4digit>.
- The skd file for the experiment should be downloaded and placed in the /data/AUSTRAL/ directory.
- The v2d and vex files should be in the /data/AUSTRAL/<experiment_name>/ directory.
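For example, for a hypothetical experiment si1134 with four-digit code 1134, the layout would look something like this (file names are illustrative):

/data/AUSTRAL/si1134.skd                <- skd file
/data/AUSTRAL/si1134/si1134.v2d         <- v2d file
/data/AUSTRAL/si1134/si1134.vex         <- vex file
/data/AUSTRAL/si1134/1134/              <- fringe data (run report_prep.sh here)
…/control/cf_1134                       <- control file, in the control directory created previously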
Once this is done you can run report_prep.sh in the four-digit code directory where the fringe data is located. When it generates the ovex file you may need to edit the station definitions (under $SITE) to the appropriate one-letter codes (Hb = L, Ht = g, etc.).
Note: make sure you are not in the soft-linked directory ~/data/AUSTRAL/… but in /data/AUSTRAL/ when you run report_prep.sh; otherwise it does not properly grab the experiment code.
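Putting this together for the hypothetical si1134 example (assuming report_prep.sh is on the observer's PATH on oaf; check the script's location and any arguments it expects):

cd /data/AUSTRAL/si1134/1134
report_prep.sh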
SNR ratios not in the correlator report
report_prep.sh will not add the SNR ratio information to the correlator report for skd files with Flexbuff information defined. Copy the skd file to ops7 and edit out the Flexbuff references (replacing them with MARK5B where applicable).
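For example, if the Flexbuff entries appear as the literal string FLEXBUFF (an assumption - check your skd file and adjust the pattern to suit), the substitution could be done with sed:

sed -i 's/FLEXBUFF/MARK5B/g' <skd_file>

Then run: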
~/Make_apriori.sh <skd_file>
This will generate a file named <skd_file>_snr.apriori. Delete all the header information from this file, leaving everything from the line starting with 'name' onwards.
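One way to strip the header is with sed (a sketch, using the aua077 example below and assuming 'name' begins the first line you want to keep):

sed -i '/^name/,$!d' aua077.skd_snr.apriori

The edited file should then be transferred to the SNR directory for your experiment on oaf. Now you must manually run two commands on this file: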
sed -i 's/Ht-Hb/Hb-Ht/g' aua077.skd_snr.apriori
/home/observer/HOPS/hops-3.17/postproc/snratio/snratio aua077.skd_snr.apriori alist.ed.out
This will populate the snr.out file; paste this file's contents into the SNR section of the correlator report!
Creating the database
vgosDBs are classified into different 'levels' depending on what processing has been completed on the data. Some analysis software, such as VieVS, will require you to generate a level 4 vgosDB, whereas others, such as nuSolve, will read in lower levels.
For IVS products, we only need to generate a level 1 database, as the analysts will complete the other levels themselves. However, for some experiments, such as those we are analysing in-house or sending to collaborators using VieVS, we will need to go up to level 4. It is good to generate a test database up to level 4 for IVS experiments anyway, as this lets us get an idea of the data quality for the session before making the final vgosDB publicly available.
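In this workflow the levels correspond roughly to the tool used to produce them:
- Level 1: vgosDbMake (fringe data ingested into the database)
- Level 2: vgosDbCalc (theoretical delays added)
- Level 3: vgosDbProcLogs (station log data added)
- Level 4: nuSolve (ambiguities resolved and data edited)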
Creating level 1 database
The first step in generating the database is to create a level 1 database; this is done using vgosDbMake on ops7 and pointing it toward the fringe data on oaf.
vgosDbMake -d 21MAY14XT /mnt/oaf/AUSTRAL/si1134/1134
The above command will make the 21MAY14XT vgosDb directory at /data/vlbi/vgosDb/2021/.
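Generalising the example, the invocation pattern is:

vgosDbMake -d <vgosDb_name> <path_to_fringe_data_on_oaf>

where the vgosDb name (here 21MAY14XT) is the session date plus the database suffix.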
Creating level 2 & 3 database
Now navigate to the directory where the vgosDb was created (/data/vlbi/vgosDb/2021/). vgosDbCalc is used to create the level 2 database, and takes the name of the vgosDb directory as its argument:
vgosDbCalc 21MAY14XT
vgosDbProcLogs generates the level 3 database. Before running this we need to create a directory for our experiment on ops7 in /data/vlbi/sessions/ for our station logs. For example, for SI1134, we would create the directory /data/vlbi/sessions/2021/si1134 and transfer the station logs (named si1134yg.log etc.) to this directory, as sketched below.
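A sketch of this step (the log locations on oaf are an assumption - adjust to wherever the station logs actually live):

mkdir -p /data/vlbi/sessions/2021/si1134
cp /mnt/oaf/AUSTRAL/si1134/*.log /data/vlbi/sessions/2021/si1134/

Once the logs are in place, execute vgosDbProcLogs with the vgosDb name as the only argument: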
vgosDbProcLogs 21MAY14XT
Creating level 4 database
For SI we need to process up to level 4; for regular IVS experiments we technically only need to go up to level 1. However, we should process up to level 4 and look around with nuSolve either way - this lets us make sure the correlator report is as accurate as possible before we create the 'production' level 1 database.
A fantastic user guide for nuSolve is available - either under the documents directory on ops7 or online.
Below are the rough steps needed to convert from level 3 to level 4 using nuSolve (these steps still need to be reviewed):
- Run nuSolve on ops7.
- By default the single-band delays are shown.
- Set Yg as the reference clock (unless it is not participating or has other issues). Reduce the clock polynomial to 0.
- Process with just clock as LCL.
- Highlight high-scatter outliers and press Ctrl-X to remove them.
- Switch to group delay.
- Resolve the ambiguity spacing with the auto button, then reprocess.
- Add extra parameters - clock, zenith, atmosphere (or maybe station positions) as LCL - and reprocess.
- Flag outliers, then apply ionospheric corrections and reprocess.
- Add the final parameters - PWL for clock, zenith and atmosphere; LCL for station positions, dUT1 rate, nutation angles and baseline clocks.
- Reprocess.
- Check the w.rms.
Ultimately, all we want out of this process is to identify any problems with the session so that we can update the correlator report. The analysis centre does not rely on our level 2-4 databases or flagging etc. The exception to this is the SI experiments, where the level 4 database is needed for analysis.
SI Processing
- Set singleband
- Set reference station clock and clock polynomial to 0
- Select only Lcl Clock as a parameter
- Process
- Change to group → solve ambiguities → process
- Add in atmospheric parameters (zenith and atm gradients as Lcl)
- Process → IonoC → Process
- Then make clocks/zenith PWL and atm gradients/dUT1 Lcl, and process again.
Processing clock breaks
The nuSolve user guide covers this in much greater detail.
- Identify the station where the break has occurred.
- Plot only the baselines where that station is the 'first station'.
- Select the right-most points from these baselines prior to the break, then press Ctrl+B.
- Then plot only the baselines where the station is the 'second station'.
- Select the right-most points from these baselines prior to the break, then press Ctrl+B.
- Select all baselines and re-process!