operations:agenda2016_06_23

**Issues taken from handover notes**
  * Problems with mk5ke during the t2111 experiment. The problem persisted through reboots and power cycles, was also present when using jive5ab, and recover did not correct it. **(persistent problem with the recorder at ke)**
    - Jamie's response: "There were two problems with the Mark5 at Katherine. On Wednesday, the module that we were recording to became read-only, and all attempts to record to it failed with "XLRappend" errors. We didn't have a second module available in mk5ke, but there was a second empty module available in mk5-2ke. So, I changed the /usr2/control/mk5ad.ctl file to use mk5-2ke as the recorder for the remainder of the T2111 experiment - the Mark5 problem this morning was caused by this. Mick had swapped modules and restarted the Mark5 units so that we could use mk5ke for recording again, while mk5-2ke was conditioning another module. DIMino was not running on mk5-2ke, which led to the mk5cn errors in the FS. I fixed it this morning by terminating the FS, editing /usr2/control/mk5ad.ctl to use mk5ke again, and restarting the FS." (a rough sketch of scripting this recorder swap follows after these notes)
   * "​Massive delay difference, clkoff was very high (order of e-01), mk5 was about 0.5s out of sync, but reported a sync error of 0. Counter and fmset didn't fix it so restarted the DBBC. That fixed the delay but then the field system crashed and had to be restarted."​ AUG025, 14/06/2016. **(Not a problem, was resolved.)**   * "​Massive delay difference, clkoff was very high (order of e-01), mk5 was about 0.5s out of sync, but reported a sync error of 0. Counter and fmset didn't fix it so restarted the DBBC. That fixed the delay but then the field system crashed and had to be restarted."​ AUG025, 14/06/2016. **(Not a problem, was resolved.)**
  * Yg disk_pos losing ~10 GB every 2 hours. AUG025, 14/06/2016. **(not sure why this was, escalate to Jamie)**
    - Jim's response: "This may be an issue with the schedule being a little more optimistic on how much time is needed for pre-scan calibration (or similar), meaning recording always starts a little late. This is also a higher data rate experiment than our usual ones, so you lose more data per unit time if things run late. I know that scheduling optimisation is something that Lucia, Davis et al. have been working on for the AUG/AUA sessions. No doubt the logs will help them fine tune things some more." (see the rough data-loss arithmetic after these notes)
  * Any persistent issues with DBBC delays? r4743 09/06/2016 and r1743 07/06/2016. **(No issues since)**
  * The following two autocorrelations: are they okay? **(Bands 1 and 14 at ke are dodgy, Jamie to fix. If these autocorrelations were persistent, DBBC reconfiguration would be required)**
{{: operations:1506ut_yg.png?200 |}}{{: operations:1506ut_ke.png?200 |}}
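
For reference on the recorder swap Jamie describes above, here is a minimal sketch of how that edit to /usr2/control/mk5ad.ctl could be scripted. It assumes the control file names the Mark5 unit on its non-comment lines (check the exact format against the local Field System documentation before relying on this); the switch_recorder helper and the host names passed to it are illustrative only, and as in Jamie's note the FS should be terminated before the file is changed and restarted afterwards.

<code python>
"""Hypothetical helper to swap the recorder named in the FS mk5ad.ctl file.

Assumption: the Mark5 unit name appears on the file's non-comment lines
(FS comment lines start with '*'). Verify against the local setup first.
"""

CTL_FILE = "/usr2/control/mk5ad.ctl"


def switch_recorder(old_host: str, new_host: str, path: str = CTL_FILE) -> None:
    """Replace old_host with new_host on every non-comment line."""
    with open(path) as f:
        lines = f.readlines()

    updated = []
    for line in lines:
        # Leave FS comment lines untouched.
        if line.lstrip().startswith("*"):
            updated.append(line)
        else:
            updated.append(line.replace(old_host, new_host))

    with open(path, "w") as f:
        f.writelines(updated)


if __name__ == "__main__":
    # e.g. fall back to the spare unit, then reverse once mk5ke is usable again:
    switch_recorder("mk5ke", "mk5-2ke")
</code>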
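
To put Jim's comment about the higher data rate in perspective, a rough back-of-envelope check (the 1 Gbps recording rate and the scan count below are assumptions, not values taken from the AUG025 log) shows that ~10 GB per two hours corresponds to only about a minute of late recording starts in total:

<code python>
# Back-of-envelope check of the Yg disk_pos shortfall.
rate_gbps = 1.0                    # assumed recording rate in Gbit/s (not from the log)
lost_gb_per_2h = 10.0              # shortfall reported in the handover notes
seconds_lost = lost_gb_per_2h * 8 / rate_gbps   # GB -> Gbit, then divide by rate
scans_per_2h = 30                  # illustrative scan count, not from the schedule
print(f"{seconds_lost:.0f} s of recording lost per 2 h, "
      f"~{seconds_lost / scans_per_2h:.1f} s late per scan for {scans_per_2h} scans")
</code>

At roughly 80 s over two hours, a few seconds of late start per scan would account for the whole shortfall, which is consistent with Jim's scheduling explanation.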
  