This is my first time using this forum, so I was excited to see your reply to my message. Thank you so much for helping me solve the problem.
I will try to do it with your suggestions.
The *peak density* alignment method requires a certain number of features (i.e. grouped peaks) to be present across all samples in order to perform the alignment. Without knowing more about your data it is hard to tell what the problem is. I'd suggest you redo the correspondence analysis (peak grouping) with less stringent settings and retry.
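With the old interface you are using, that means re-running `group()` with relaxed settings before `retcor()`. A minimal sketch; the parameter values below are illustrative assumptions, not recommendations for your data:

```r
library(xcms)

## Regroup with less stringent density settings so more features are
## found across all samples (values here are only examples).
xset1 <- group(xset1, method = "density",
               bw      = 30,    # wider retention time window
               mzwid   = 0.025, # m/z slice width
               minfrac = 0.3,   # require a feature in only 30% of samples
               minsamp = 1)

## Then retry the peak-groups retention time correction.
xset2 <- retcor(xset1, method = "peakgroups", family = "s",
                plottype = "m", missing = 1, extra = 1)
```

Loosening `minfrac` (and, if needed, `bw`) is usually what increases the number of peak groups available for alignment.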
Secondly: I would suggest that you switch over to the *new* user interface and functions (see https://bioconductor.org/packages/release/bioc/vignettes/xcms/inst/doc/xcms.html for details). There you will, for example, also have the option to perform the alignment on a subset of samples (e.g. if you have QC samples) or to exclude blank samples from the alignment (the blanks could in fact be causing the problem described above).
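In the new interface this could look roughly as follows. This is a sketch only: it assumes an `XCMSnExp` object `xdata` with chromatographic peaks already detected, and a sample annotation column `sample_group` in which QC samples are labelled `"QC"` (both are assumptions about your setup):

```r
library(xcms)

## Correspondence (peak grouping) with relaxed settings.
pdp <- PeakDensityParam(sampleGroups = xdata$sample_group,
                        minFraction  = 0.3,  # less stringent
                        bw           = 30)
xdata <- groupChromPeaks(xdata, param = pdp)

## Alignment based only on the QC samples; blanks are thereby
## excluded from the retention time adjustment.
pgp <- PeakGroupsParam(minFraction  = 0.85,
                       subset       = which(xdata$sample_group == "QC"),
                       subsetAdjust = "average")
xdata <- adjustRtime(xdata, param = pgp)
```

The `subset`/`subsetAdjust` arguments of `PeakGroupsParam` are what let you drive the alignment from QCs while still adjusting all samples.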
Does anybody know how to solve this problem with XCMS? It shows:
> xset2 <- retcor(xset1, family= "s", plottype= "m", missing=1, extra=1,
Performing retention time correction using 2 peak groups.
Error in do_adjustRtime_peakGroups(peaks = peakmat, peakIndex = object@groupidx, :
Not enough peak groups even for linear smoothing available!
In addition: Warning message:
In do_adjustRtime_peakGroups(peaks = peakmat, peakIndex = object@groupidx, :
Too few peak groups for 'loess', reverting to linear method
I am not able to change the columns in the result table. I would like to view "MS/MS" or "Metlin_MS/MS", but the result table does not change when I select these columns. I also tried other internet browsers (Firefox, Chrome, ...).
I can see that there should be MS/MS results, because there is a "Location of MS/MS scans" entry, but I am not able to view them in the result table.
Does anyone know how to change that?
Thank you in advance for your answer.
I also have the same problem. Did you figure out how to do it?
I don't have a lot of HRMS experience, but I've gradually been doing more. Our focus is high-throughput, so robustness is something we care about as well. As a general rule, the top spec instruments tend to be less robust than the lower tier models, although there are always exceptions.
So far I've used an Agilent qTOF (6540) and two Thermo Orbitraps (Fusion Lumos and HF-X). All of these were set up for analytical-flow LC-MS.
The Agilent system had quite a few boards replaced over a few years, but I can't say that the instrument was looked after particularly well.
The Fusion Lumos ran well for over 1000 continuous samples (plasma), but beyond that I'm not sure how robust the machine is.
The HF-X had a bad reputation for 'dirtying', but Thermo says they have fixed this in the current generation. If you are willing to wait a few months, I can fill you in on how it goes.
For the work I was doing (lipidomics), the much higher resolution of the Orbitrap was very useful. Overall sensitivity was higher with the Orbitraps as well, but we are comparing two very new instruments to a much older one.
However, both of the Orbitraps have quite harsh ion funnels, which cause in-source fragmentation of fragile molecules. This isn't exclusive to Thermo instruments.
Most vendors are aware of this, so you should bring it up with them if this is a concern.
Regarding quantitation, there are different levels of quantitation that people expect. Most metabolomics/lipidomics people have a relaxed view on quantitation. That is, there are caveats and assumptions that everyone accepts can't be resolved (not enough internal standards, shotgun vs. LC, ...).
Most newer instruments provide a fairly decent dynamic range. Certainly much more than the expected variability you would see in a single metabolite in a group of people. So I guess it's important to ensure that sample prep is correct to put the right amount of 'stuff' in the machine.
I'm sure others can chime in with some more experiences and knowledge.
Some updates and a few more questions.
I have been using the 'IPO' package to determine my XCMS parameters. The resulting peak table contains over 50,000 features. I'm ok with this for now. Has anyone else used IPO? Thoughts?
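For anyone curious, my IPO workflow looks roughly like the sketch below. The file path and parameter ranges are illustrative assumptions, not the values I actually used:

```r
library(IPO)

## Start from IPO's centWave defaults and optimise a couple of
## parameters over illustrative ranges.
params <- getDefaultXcmsSetStartingParams("centWave")
params$ppm           <- c(5, 15)   # search range, not fixed values
params$min_peakwidth <- c(5, 12)

## "mzML" is a placeholder directory of raw files.
resultPeakpicking <- optimizeXcmsSet(files  = list.files("mzML", full.names = TRUE),
                                     params = params)

## The optimised xcmsSet and settings are returned here.
optimizedXcmsSet <- resultPeakpicking$best_settings$xset
```

`optimizeRetGroup()` can then be used the same way to optimise the grouping/retention-correction parameters.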
I successfully ran CAMERA (thanks CoreyG for the suggestions) with the goal of finding redundant features (e.g. adducts/isotopes). It identified ~7,000 pc groups among the ~50,000 features. Now, if I am understanding correctly, I need to select one feature from each pc group to serve as the "representative" for that compound. Any recommendations on how to make this selection?
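One common heuristic (an assumption on my part, not a CAMERA-prescribed rule) is to keep the most intense feature in each pcgroup, since it is usually the most reliably measured. In practice the table would come from `CAMERA::getPeaklist()` on your `xsAnnotate` object, with a median-intensity column computed over your sample columns; a toy stand-in is used here so the selection logic is self-contained:

```r
## Toy stand-in for the CAMERA peak list: m/z, median intensity
## across samples, and the pcgroup assignment.
peaklist <- data.frame(mz      = c(100.1, 118.1, 136.1, 200.2, 218.2),
                       medInt  = c(5e4, 2e5, 1e4, 8e4, 3e5),
                       pcgroup = c(1, 1, 1, 2, 2))

## For each pcgroup, keep the row index of the most intense feature.
keep <- unlist(lapply(split(seq_len(nrow(peaklist)), peaklist$pcgroup),
                      function(i) i[which.max(peaklist$medInt[i])]))

reduced <- peaklist[keep, ]   # one representative row per pcgroup
```

Alternatives would be to prefer the feature annotated as the [M+H]+ (or [M-H]-) adduct when CAMERA provides that annotation, falling back to intensity otherwise.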