Messages - Jan Stanstrup
Would be possible with some hacking, yes. The CDF writer in XCMS is not compatible with any other software though. If you can use one of the XML based formats it is more feasible.
I think what you'd need to do is peak picking, feature grouping and retcor. Then use the results from retcor to modify an xcmsRaw object with the corrected retention times.
You will only change the scan times this way though. You won't get any fancy peak warping.
You might want to take a look at MSnbase for handling the raw data. The development version of XCMS uses MSnbase-based handling of the raw data.
Are you talking about adducts and fragments? XCMS does not attempt to give you one feature per compound but one feature per ion.
CAMERA would be the standard tool to try to group ions originating from the same compound. There is however no consensus on how to statistically treat/merge those groups.
Given that the peak picking went well and didn't merge several masses it shouldn't have, yes that is my experience. If you see big differences you know something is not going right.
4.9554 could be a Na+<-> NH4+ difference. https://github.com/stanstrup/commonMZ/blob/master/inst/extdata/adducts_fragments.tsv
26 I don't have a good idea for. What is the accurate mass difference?
Are you talking about which m/z value to use? The one from the peaktable or from the raw data?
In xcms the m/z for each feature in individual samples is the intensity weighted mean across the peak. Then when you group features across samples it uses the median m/z of those mean values.
Because of this averaging the m/z in your peaktable should have a bit better accuracy. That is under the assumption that the parameters were sane enough not to group things that are NOT the same compound. So using this value you should normally be able to restrict your m/z range more when you search.
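To make the averaging above concrete, here is a minimal base-R sketch of the logic (the m/z and intensity values are made up for illustration; this is not xcms's actual implementation, just the same arithmetic):

```r
# Hypothetical scan-level data for one chromatographic peak in one sample
mz  <- c(147.0762, 147.0765, 147.0768, 147.0764)
int <- c(1.2e4, 8.9e4, 6.1e4, 9.5e3)

# Per-sample feature m/z: intensity-weighted mean across the peak
mz_sample <- weighted.mean(mz, int)

# Across samples, the grouped feature m/z is the median of the
# per-sample weighted means (values below are invented examples)
mz_feature <- median(c(mz_sample, 147.07655, 147.07648))
```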
In 15 min at 18:00 CET there is a webinar that I think covers exactly this: https://attendee.gotowebinar.com/register/564911869539283713
XCMS spits out "features", meaning all chromatographic peaks it can find for all masses (so think of it as XCMS doing extracted ion chromatograms for all possible masses and integrating all peaks it sees). This means there might be many features in your list that come from the same compound. Some are pseudo-molecular ions, some are fragments, some are adducts, some are isotopes, some are noise, some are contaminants... You cannot even count on all compounds showing a pseudo-molecular ion.
Probably the main challenge in metabolomics is exactly deciphering what is what. It is not a trivial task.
The typical first step to figuring out what is what is to use the CAMERA package, which tries to group features according to which are likely to come from the same compound.
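A typical CAMERA run on a grouped xcmsSet could look roughly like this (a sketch, assuming `xset` is an already grouped xcmsSet and positive-mode data; check the CAMERA vignette for sensible parameters for your instrument):

```r
library(CAMERA)

# xset: a grouped xcmsSet from the usual xcms workflow
xsa <- xsAnnotate(xset)
xsa <- groupFWHM(xsa)                         # group features by retention time
xsa <- findIsotopes(xsa)                      # annotate isotope clusters
xsa <- groupCorr(xsa)                         # refine groups by peak-shape correlation
xsa <- findAdducts(xsa, polarity = "positive")  # annotate adducts

annotated <- getPeaklist(xsa)                 # peak table with pcgroup/adduct columns
```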
Looks like it has found a feature with mz between 0 and 368... That sounds like a bug. There were some recent fixes to something similar. Perhaps try to update from GitHub and post your version.
To save your whole workspace use save.image(). To save only the xcmsSet object use save or saveRDS.
Probably what you want though is a peak table. For that you use peakTable and save that table for example with write.csv.
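Put together, that could look something like this (file names are placeholders; `xset` is assumed to be your grouped and filled xcmsSet):

```r
save.image("workspace.RData")        # save the whole workspace
saveRDS(xset, "xset.rds")            # or save only the xcmsSet object

pt <- peakTable(xset)                # feature table across samples
write.csv(pt, "peaktable.csv", row.names = FALSE)
```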
If you want to do real quantification then you need an internal standard for each compound you want to measure.
A weaker approach would be external calibration curves with non-labelled standards. This would not take the matrix effect into account. So unless you can first validate, by comparing against an approach with labelled standards, that this makes sense for your particular matrix, I would not give any validity to such an approach.
For untargeted metabolomics people disagree heavily. Some use a single standard, some try to use a standard for each compound group or possibly group by retention time.
I have yet to see anyone show that internal standards help anything in untargeted studies...
EDIT: I should clarify that my comments regarding untargeted studies were about the idea that you can use internal standards to quantify in an untargeted setting and the idea that you can use internal standards to correct analytical drifts.
As @stacey.reinke and @romanas chaleckis pointed out they can however be very useful to check the system.
Perhaps a bad file? https://github.com/sneumann/xcms/issues/55
I think the parameters are named slightly differently in mzmine but here is a tutorial for how to estimate reasonable parameters in xcms: https://rawgit.com/stanstrup/XCMS-course/master/1.%20XCMS.html
The peaks for each sample are in @peaks with rtmin and rtmax (from memory). That should be the integration limits.
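If it helps, the relevant columns can be pulled out like this (a sketch, assuming `xset` is an xcmsSet; column names are from memory, so check `colnames(xset@peaks)` yourself):

```r
pk <- xset@peaks   # matrix of per-sample chromatographic peaks

# rtmin/rtmax should be the integration limits for each peak
head(pk[, c("mz", "rt", "rtmin", "rtmax", "into", "sample")])
```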
You can use MSnbase instead to plot chromatograms. That should be faster. Some possible inspiration here: https://cdn.rawgit.com/stanstrup/xcms_ggplot2/8ac45aeb/ggplot.Chromatograms.html
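A minimal MSnbase-based extracted ion chromatogram could look like this (the m/z and rt windows are illustrative; `files` is assumed to be a character vector of mzML/mzXML paths):

```r
library(MSnbase)

raw <- readMSData(files, mode = "onDisk")   # on-disk access keeps memory use low
chr <- chromatogram(raw,
                    mz = c(147.07, 147.08), # illustrative m/z window
                    rt = c(100, 200))       # illustrative rt window in seconds
plot(chr)
```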
I will try to contact NIST and see what they can provide.
Does anyone know if there is a way to direct access to the data in the NIST library? I would like to automate some things in R but the NIST library seems to be in some binary format.
Has anyone done something similar?