Messages - Jan Stanstrup
Looks like it has found a feature with mz between 0 and 368... That sounds like a bug. There were some recent fixes for something similar. Perhaps try updating from GitHub and post your version.
To save your whole workspace use save.image(). To save only the xcmsSet object use save() or saveRDS().
Probably what you want, though, is a peak table. For that you use peakTable and then save that table, for example with write.csv.
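A minimal sketch of what I mean, assuming an existing xcmsSet object called xset (the object and file names are just examples):

```r
library(xcms)

save.image("workspace.RData")      # saves the whole R workspace
saveRDS(xset, "xset.rds")          # saves only the xcmsSet object
# xset <- readRDS("xset.rds")      # restores it in a later session

peaks <- peakTable(xset)           # feature table (after group/fillPeaks)
write.csv(peaks, "peaktable.csv", row.names = FALSE)
```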
If you want to do real quantification then you need an internal standard for each compound you want to measure.
A weaker approach would be external calibration curves with non-labelled standards. This would not take the matrix effect into account. So unless you can first validate, by comparing to an approach with labelled standards, that this makes sense with your particular matrix, I would not give any validity to such an approach.
For untargeted metabolomics people disagree heavily. Some use a single standard, some try to use a standard for each compound group or possibly group by retention time.
I have yet to see anyone show that internal standards help anything in untargeted studies...
EDIT: I should clarify that my comments regarding untargeted analysis were about the idea that you can use internal standards to quantify in an untargeted setting, and the idea that you can use internal standards to correct analytical drifts.
As @stacey.reinke and @romanas chaleckis pointed out they can however be very useful to check the system.
Perhaps a bad file? https://github.com/sneumann/xcms/issues/55
I think the parameters are named slightly differently in mzmine but here is a tutorial for how to estimate reasonable parameters in xcms: https://rawgit.com/stanstrup/XCMS-course/master/1.%20XCMS.html
The peaks for each sample are in @peaks, with rtmin and rtmax (from memory). Those should be the integration limits.
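Roughly like this, again assuming a processed xcmsSet called xset:

```r
# @peaks is a matrix with one row per detected peak per sample;
# rtmin/rtmax (and mzmin/mzmax) give the integration limits used
pks <- xset@peaks                  # or peaks(xset)
head(pks[, c("mz", "rt", "rtmin", "rtmax", "into", "sample")])
```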
You can use MSnbase instead to plot chromatograms. That should be faster. Some possible inspiration here: https://cdn.rawgit.com/stanstrup/xcms_ggplot2/8ac45aeb/ggplot.Chromatograms.html
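Something along these lines (the file name is a placeholder):

```r
library(MSnbase)

# on-disk mode only reads spectra when needed, which keeps it fast
raw <- readMSData("sample.mzML", mode = "onDisk")
tic <- chromatogram(raw, aggregationFun = "sum")   # TIC; add mz =/rt = for an EIC
plot(tic)
```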
I will try to contact NIST and see what they can provide.
Does anyone know if there is a way to directly access the data in the NIST library? I would like to automate some things in R, but the NIST library seems to be in some binary format.
Has anyone done something similar?
My thought, too, was that it had to be the different retcor. That is why I asked whether the same issue appears after the first grouping.
I definitely don't see any need for metaXCMS if the samples were analyzed together.
As for stats, I really urge you not to just use the stats in XCMS. You are missing things that you probably need, such as:
* Drift correction
* Correction for multiple testing --> FDR.
* Statistical model that takes into consideration all the factors in your study.
I don't think there is anything *wrong* with doing C vs A and C vs B but at the very least you'd need FDR correction on the whole set of p-values.
So I'd advise investing some time into doing stats in R using lm or lmer (and/or something multivariate) depending on your study.
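A toy sketch of what I mean, with simulated data (a real model would of course include your actual study factors):

```r
set.seed(1)
d <- data.frame(intensity = c(rnorm(6, mean = 10), rnorm(6, mean = 12)),
                group     = rep(c("A", "B"), each = 6))

fit <- lm(intensity ~ group, data = d)   # in practice one model per feature
p   <- summary(fit)$coefficients["groupB", "Pr(>|t|)"]

# correct the *whole* set of p-values at once, e.g. Benjamini-Hochberg FDR
p.adjust(c(0.001, 0.01, 0.04, 0.20), method = "fdr")
```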
Rick Dunn talks about some of these things in the last talk of the Data processing workshop here (unfortunately the last part was cut):
I am interested in a similar issue.
For data analysis it would be nice to have blank samples in your dataset so that you can assess noise compared to sample values. But since there are no or few peaks, peak grouping and alignment have failed in my hands.
What would be nice to have is at least being able to add new samples and do a dumb integration with fillPeaks on the new samples.
I have managed this by forcing the new files into the xcms object and faking the scan times (keeping the raw/original ones and setting the corrected RTs using the mean correction of the original set), but this is very dirty...
@johannes.rainer Any thoughts on whether something clean is feasible here?
Hmm. Then it is not obvious to me what is going on. Do you see the same thing before retcor, after the first grouping?
If I understand correctly you have 3 groups: A, B, C.
1) If you do A and C you get features found in A+C
2) If you do B and C you get features found in B+C
3) If you do A, B and C you get features found in A+B+C.
So why should you not get different peak tables in those 3 cases?
Depends on why it is an outlier. Is it RT shifts or just very different intensities? If the first, excluding it could make sense. If not, then I would say no. If there is no shift but there are unique features, pruning those features from the peak table might be fine.
I am not sure it is completely clear what you are comparing. Are you talking about processing with and without dividing the samples in groups? Or two completely separate processing of the two groups?
Very likely it is the grouping step. The settings there are per group so it matters if you process things as one group or not: https://rawgit.com/stanstrup/XCMS-course/master/1.%20XCMS.html#/30
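For illustration: minfrac in the density-based grouping is evaluated per sample class, so the same data grouped as one class vs. several classes can give different features. The parameter values here are just examples, and xset is a hypothetical xcmsSet:

```r
# minfrac = 0.5 means a feature must be present in at least half the
# samples of *some* class; the class definitions therefore change the result
xset <- group(xset, method = "density", bw = 5, mzwid = 0.025, minfrac = 0.5)
```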
To view raw files? No. To browse converted files you can use MZmine.
To do the centroiding? Yes, msconvert from ProteoWizard can, but judging from the docs, not well. msconvert cannot use Waters' centroiding the way it can for other vendor formats, so it has to use its own, supposedly inferior, implementation.
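For reference, a typical msconvert call that applies ProteoWizard's own centroiding during conversion (the file name is a placeholder):

```shell
# "peakPicking true 1-" centroids all MS levels with ProteoWizard's
# algorithm; for Waters data the vendor centroider is not available
msconvert sample.raw --mzML --filter "peakPicking true 1-"
```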