Hello, I've been processing Agilent GCMS data (1 to 5 files, as .abf files), using retention time identification and alignment (i.e. no RI index files). In MS-DIAL versions up to and including 4.48 (I've tested multiple versions back to 4.24, i.e. 4.24, 4.36, 4.38, 4.48), the software behaves as expected, without any issues. However, in the current version (4.80), and all versions after 4.48 up to this (i.e. 4.60, 4.70, 4.80), I get the following errors:
1) After finishing Data Processing, MS-DIAL closes (crashes). The .mtd2 file is automatically saved before the crash, as MS-DIAL can then be re-started and the .mtd2 project file opened, with processed data spots showing as expected for individual files. This happens whether I process a single file, or multiple files as input.
2) After successfully processing multiple files (and re-starting MS-DIAL and loading the saved .mtd2 project file as outlined above), the spots from individual files can be viewed by double clicking on files in the File Navigator. However, after double-clicking the alignment file in Alignment Navigator, MS-DIAL crashes. So it is not possible to view alignment results. I can generate an alignment file in the new versions (e.g. 4.80), and open the saved .mtd2 file in versions <= 4.48 to view the alignment result, but not in any versions >= 4.60.
I have installed all MS-DIAL versions on Win 10 (build version 1909, Intel i5-7200U 2.50 GHz processor with 16 GB RAM).
It looks to me as though the alignment results are being generated successfully in the latest MS-DIAL versions, but simply cannot be opened in the GUI. Whatever caused this problem happened in the upgrade from version 4.48 to 4.60. So at the moment I am restricted to using 4.48 until this is resolved. Are there any user functionality differences between 4.48 and 4.80?
I've attached a screenshot of the .mtd2 file opened in 4.80, where I processed 5 .abf files. Double-clicking on any of the alignmentResults will crash MS-DIAL. The screenshot looks identical in 4.48 (i.e. a 4.80-generated .mtd2 file opened in 4.48), and the alignmentResults all open with no issues.
- findIsotopes() default intval = "maxo"; annotate() passes down default intval = "into"
- findIsotopes() default mzabs = 0.01; annotate() passes down default mzabs = 0.015
- findIsotopes() default filter = TRUE; annotate() does not recognise "filter" as an argument, so it cannot be changed from the default (e.g. to FALSE) in the annotate() pipeline
- groupCorr() default cor_exp_thr = 0.75; annotate() does not recognise "cor_exp_thr" as an argument, so it cannot be changed from the default in the annotate() pipeline
I had no end of trouble running the sequence of core functions with default values (xsAnnotate(), groupFWHM(), findIsotopes(), groupCorr(), findAdducts()) versus annotate(), and getting different results because of these inconsistencies. Please could the defaults be harmonised, and the filter and cor_exp_thr arguments made available to pass through in annotate()?
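In the meantime, my workaround is to avoid annotate() entirely and run the core functions with every contested default spelled out explicitly, so the result no longer depends on which route is taken. A sketch (assuming an existing grouped xcmsSet called `xs`; argument names and values are taken from the mismatches listed above and may differ between CAMERA versions):

```r
library(CAMERA)  # Bioconductor; assumes `xs` is an existing grouped xcmsSet

an <- xsAnnotate(xs)
an <- groupFWHM(an)
# annotate() passes intval = "into" and mzabs = 0.015, while the core default
# is intval = "maxo", mzabs = 0.01: pick one set and state it explicitly.
an <- findIsotopes(an, intval = "into", mzabs = 0.015, filter = TRUE)
an <- groupCorr(an, cor_exp_thr = 0.75)  # stuck at its default when run via annotate()
an <- findAdducts(an, polarity = "positive")
peaklist <- getPeaklist(an)
```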
Also, I am having issues with the "filter" argument in findIsotopes(). As far as I can tell, this traces to the relevant code snippet from the CAMERA:::findIsotopesPspec() function (embedded in findIsotopes()):
I read this as: the maximum expected C13 isotope intensity is estimated by taking the total number of carbons in the compound (numC) multiplied by the natural abundance of the C13 isotope, i.e. assuming every carbon could be a C13. This will be an overestimate, as numC will always be greater than the actual number of carbons: not all of the mass is carbon, and not every carbon is a C13 (e.g. for a C12 alkane, numC as calculated from the expected [M+H]+ is 14). This is fine and allows a margin of error. However, in contrast to inten.max, inten.min is a rather hard cutoff: it assumes a minimum of one C13 (fine), but then sets the intensity threshold to the assumed natural abundance * the C12 intensity. That is not so fine, because any instrumental measurement that slightly underestimates the C13 isotope intensity (e.g. on my Orbitrap, where isotope intensity ratios can deviate +/- 20% from theoretical) will fail this minimum estimate. Why not do this instead:
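To make the point concrete, here is a minimal base-R paraphrase of the bounds as I read them (my own sketch, not the actual CAMERA:::findIsotopesPspec() code; 0.011 stands in for the 13C natural-abundance constant), together with the tolerance-relaxed minimum I am suggesting:

```r
# Approximate natural abundance of 13C (~1.1%)
c13_abund <- 0.011

# The bounds as I read the current code: numC is estimated from the
# monoisotopic m/z (so it overestimates the true carbon count), the upper
# bound is generous, but the lower bound is a hard one-13C cutoff.
iso_bounds_current <- function(mz, int_c12) {
  numC <- floor(mz / 12)
  list(inten.max = int_c12 * numC * c13_abund,
       inten.min = int_c12 * c13_abund)
}

# Suggestion: relax the minimum by an instrument tolerance, e.g. 20% for
# Orbitrap data, so slightly-low 13C measurements are not rejected outright.
iso_bounds_proposed <- function(mz, int_c12, tol = 0.2) {
  b <- iso_bounds_current(mz, int_c12)
  b$inten.min <- b$inten.min * (1 - tol)
  b
}
```

For the C12 alkane example above ([M+H]+ around m/z 171), this gives numC = 14; a measured 13C peak 10-20% below the one-13C expectation fails the current hard minimum but would pass the relaxed one.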
Hi, We recently upgraded to a new remote linux box, so I took the opportunity to install R 2.15.0, and then my selected bioconductor packages, including xcms 1.32.0 via the biocLite() method. However, xcms failed to install, citing that mzR wasn't available. I at first thought this was because I forgot to re-install the netCDF library (the paths had changed to the new box), so I reinstalled netCDF and then tried xcms installation again - still fails. I'm not sure what to do next.
Of possible note: I used the --prefix option to install both R and netCDF to user-defined locations; it's not possible to install either as root, as I am using a shared networked box.
Some details:

> sessionInfo()
R version 2.15.0 (2012-03-30)
Platform: x86_64-unknown-linux-gnu (64-bit)

The downloaded source packages are in ‘/tmp/RtmpcsZOLu/downloaded_packages’
Updating HTML index of packages in '.Library'
Making packages.html ... done
Warning messages:
1: In install.packages(pkgs = pkgs, lib = lib, repos = repos, ...) :
  installation of package ‘mzR’ had non-zero exit status
2: In install.packages(pkgs = pkgs, lib = lib, repos = repos, ...) :
  installation of package ‘xcms’ had non-zero exit status
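For the record, this is what I plan to try next, on the assumption that the mzR build simply cannot find the relocated netCDF headers and libraries (the paths below are placeholders for my own --prefix locations, not taken from the actual error log):

```r
# Placeholders: replace /path/to/netcdf with the actual --prefix location.
# Sys.setenv() values are inherited by the compiler/linker processes that
# install.packages() launches, so the mzR configure step can find netCDF.
Sys.setenv(
  CPPFLAGS        = "-I/path/to/netcdf/include",
  LDFLAGS         = "-L/path/to/netcdf/lib",
  LD_LIBRARY_PATH = paste("/path/to/netcdf/lib",
                          Sys.getenv("LD_LIBRARY_PATH"), sep = ":")
)

source("http://bioconductor.org/biocLite.R")
biocLite("mzR")  # scroll up to the *first* compiler error in the output;
                 # "non-zero exit status" only reports that something failed
```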
Hi All, This is really a stats question but reflects a situation I commonly come up against in interpreting xcms results, and I'm sure many forum members do too.
Scenario: I generate an xcmsSet with 100 samples (5 replicates in each of 20 classes), generating 200 features. One class is a control class, and the other 19 are different treatments. I want to determine which features in each of the treatment classes are significantly different compared to the same features in the control class. What I ultimately want to do is produce a separate chart or table for each class where only features significantly different from the control class are listed. The scripting for this is straightforward; how to apply the underlying stats is not.
A 2-class problem would be easy: I would run a t-test on each feature, followed by a Bonferroni or FDR adjustment on the resulting list of p-values to determine significance for each feature. However, for the multiple-class scenario, how should the p-value adjustment be carried out?
What I've been doing to date is an ANOVA on each feature and, if the ANOVA p-value is < 0.05, a post-hoc test using the TukeyHSD procedure to produce class-pairwise adjusted p-values, to determine which treatment classes are significantly different from the control class. In this case, there is no p-value adjustment on the ANOVAs even though I am conducting multiple ANOVAs. I'm worried that although Tukey's test corrects for family-wise error between classes, I am making no allowance for error-rate correction between features.
In my approach, should I first adjust the ANOVA p-values with a multiple-testing correction across features (e.g. Bonferroni, FDR, etc.), and only go on to a post-hoc Tukey test if the adjusted ANOVA p-value is < 0.05? Or is there a better recommended approach?
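To make the question concrete, here is a minimal sketch in base R of my current approach on simulated data (the object names and the simulated intensity matrix are mine, purely for illustration; step 2 is the across-feature adjustment I am asking about):

```r
set.seed(1)
n_feat <- 200; n_class <- 20; n_rep <- 5
cls <- factor(rep(paste0("class", seq_len(n_class)), each = n_rep))

# Simulated intensity matrix standing in for the xcms feature table:
# one row per feature, one column per sample.
X <- matrix(rnorm(n_feat * n_class * n_rep), nrow = n_feat)

# 1) One ANOVA per feature
p_anova <- apply(X, 1, function(y) anova(aov(y ~ cls))[["Pr(>F)"]][1])

# 2) Adjust across features (FDR via Benjamini-Hochberg; method =
#    "bonferroni" for the stricter family-wise control)
p_adj <- p.adjust(p_anova, method = "BH")

# 3) Post-hoc Tukey HSD only on features surviving the adjusted threshold;
#    each result holds class-pairwise adjusted p-values, from which the
#    treatment-vs-control rows can be pulled out per feature.
sig_feats <- which(p_adj < 0.05)
tukey <- lapply(sig_feats, function(i) TukeyHSD(aov(X[i, ] ~ cls)))
```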
Is there any intention to migrate archived and future posts from the existing xcms mailing list to here, so all past xcms mailing list content will be searchable through this forum? Also, will the existing xcms mailing list continue to be maintained and accessed separately from this forum? It doesn't make much sense to keep them separated...