
m/z value off by 0.1~0.2 Da

In our lab we are using a Waters QTOF. I recently found that the m/z values in the results from XCMS are off by around 0.1-0.2 Da from those of Progenesis (software developed by Waters). I believe the m/z values from Progenesis are right, since they agree with the raw data, where I applied mass correction during data acquisition. I just wonder if anyone has had similar problems before. Are there any parameters in XCMS related to mass correction? Any help will be appreciated.

Re: m/z value off by 0.1~0.2 Da

Reply #1
How did you convert the files? ProteoWizard does not calibrate the data. You can use massWolf or DataBridge. See here viewtopic.php?f=26&t=359&p=1694#p1694. I made a wrapper for massWolf that makes it possible to extract each "function" correctly.
Blog: stanstrup.github.io

Re: m/z value off by 0.1~0.2 Da

Reply #2
Quote from: "Jan Stanstrup"
How did you convert the files? ProteoWizard does not calibrate the data. You can use massWolf or DataBridge. See here viewtopic.php?f=26&t=359&p=1694#p1694. I made a wrapper for massWolf that makes it possible to extract each "function" correctly.

Hi Jan,

Thanks so much for your reply. My data were acquired in continuum mode. What I did was simply upload the raw files directly to XCMS Online, since XCMS is compatible with the .raw format. However, I found that the m/z values of the processed data were off from my raw data.

The next thing I tried was to convert the continuum file to a centroided file before uploading to XCMS Online. The processed m/z values were still off. Do I need to convert the .raw file to .CDF with DataBridge and use the local XCMS?

Re: m/z value off by 0.1~0.2 Da

Reply #3
How did you do the centroiding yourself?
They probably use ProteoWizard in XCMS Online, so I guess that is why the mass is off. I have no idea how continuum data is handled there. The way I would get correct files is to do one of the following:
1) DataBridge followed by ProteoWizard with centroiding enabled
2) my massWolf wrapper followed by ProteoWizard with centroiding enabled

You can then use XCMS Online or anything else you want.
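A minimal sketch of the ProteoWizard step in the workflow above. The input file name is a placeholder, and the `peakPicking true 1-` filter string is the commonly documented msconvert syntax for centroiding all MS levels; the command is only built and printed here, not executed:

```python
# Build (but do not run) the msconvert command for the centroiding step
# that follows DataBridge/massWolf conversion. File name is a placeholder.

def msconvert_centroid_cmd(infile, outdir="centroided"):
    """msconvert command that writes centroided mzML for all MS levels."""
    return [
        "msconvert", infile,
        "--mzML",                           # output format
        "--filter", "peakPicking true 1-",  # centroid MS level 1 and up
        "-o", outdir,
    ]

cmd = msconvert_centroid_cmd("sample01.mzXML")
print(" ".join(cmd))
# To actually run it (requires msconvert on the PATH):
# import subprocess; subprocess.run(cmd, check=True)
```

Building the argument list instead of a single shell string avoids quoting problems with the filter argument when the command is eventually passed to a process runner.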
Blog: stanstrup.github.io

Re: m/z value off by 0.1~0.2 Da

Reply #4
Quote from: "Jan Stanstrup"
How did you do the centroiding yourself?
They probably use ProteoWizard in XCMS Online, so I guess that is why the mass is off. I have no idea how continuum data is handled there. The way I would get correct files is to do one of the following:
1) DataBridge followed by ProteoWizard with centroiding enabled
2) my massWolf wrapper followed by ProteoWizard with centroiding enabled

You can then use XCMS Online or anything else you want.

Hi Jan, I tried the method you recommended. I got stuck at the ProteoWizard step, since it seemed unable to read the .CDF file generated by DataBridge. I will try the package you wrote to see if it works.

Re: m/z value off by 0.1~0.2 Da

Reply #5
Just got a message from the developers of ProteoWizard that a lock-mass correction function was very recently added to ProteoWizard. Downloading the latest version and using msconvert.exe with the new lockmassRefiner filter can solve this problem.
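A hedged sketch of such an msconvert invocation. The lock-mass value (556.2771, leucine enkephalin [M+H]+, a common Waters lock mass) and the 0.1 Da tolerance are illustrative assumptions, not values from this thread; substitute the lock mass you actually acquired:

```python
# Build the msconvert command with the lockmassRefiner filter described
# above. Lock-mass m/z and tolerance are placeholder values.

def msconvert_lockmass_cmd(raw_file, lock_mz=556.2771, tol=0.1):
    """msconvert command applying lock-mass correction, then centroiding."""
    return [
        "msconvert", raw_file,
        "--mzML",
        "--filter", f"lockmassRefiner mz={lock_mz} tol={tol}",
        "--filter", "peakPicking true 1-",  # centroid after correction
    ]

print(" ".join(msconvert_lockmass_cmd("sample01.raw")))
```

Note the filter order: the lock-mass refinement is listed before peak picking, so the correction is applied to the profile data before it is centroided.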

Re: m/z value off by 0.1~0.2 Da

Reply #6
That's very interesting and good news if there is finally a proper converter. Let us know if it works out and which version works.

- Jan
Blog: stanstrup.github.io

Re: m/z value off by 0.1~0.2 Da

Reply #7
Hi,

I have a similar problem with LC-MS/MS Orbitrap data. I have used MSConvert to export centroided values, but the m/z output from xcms is off by around 0.1 m/z.

I am trying to use XCMS to perform peak alignment of proteomics samples. I want to use this to assign peptide IDs in samples where no MS/MS was performed on the same parent ion.

It may be the way I am approaching the problem with xcms - I'm not really sure how to use the package well yet! I have essentially followed the instructions in the vignette for LC-MS preprocessing and analysis.

Any help/suggestions would be appreciated!

Thanks
Harry

Re: m/z value off by 0.1~0.2 Da

Reply #8
I don't think it is an issue with the conversion of the files, as most people here suggest. We have tested XCMS and compared it to the output generated by Progenesis as well. I played with different settings in order to increase the number of features detected. It was always shocking to me that whereas Progenesis could detect tens of thousands of features in a given study, XCMS would come up with fewer than 10,000.

The settings I changed were in the xcmsSet() function: a narrower peakwidth (depends on your LC system, UHPLC/HPLC), lower ppm values (5 or less), and snthresh. In addition, it was critical to set the grouping of features in group() to a VERY narrow mzwid window.

Even playing around with these settings never got us to as many features as the ones detected by Progenesis, and it yielded poor quantitation of low-abundance features. The quantitation of robustly detected features ("nice peaks") was pretty comparable between both tools, but low-abundance features tend to get clustered together despite having clearly different m/z values (up to 1 amu apart!). In the output of XCMS you can see the m/z windows for given features; some are +/- 1 amu, which depending on the mass of the compound could be several thousand ppm. This is an example output for one of the features that I picked randomly:

mzmed           mzmin           mzmax           rtmed           rtmin           rtmax
738.5467697   738.5036619   738.5628371   471.7307565   458.0702466   490.225902

Correct me if I am wrong, but I interpret this as the measurements for a feature that was measured in a bin of -58 ppm to +21 ppm around m/z 738.5468?
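As a quick check of that interpretation, the mzmin/mzmax bounds of the feature above can be converted to ppm offsets from mzmed; they come out to about -58 and +22 ppm, in line with the roughly -58/+21 quoted above:

```python
# Convert the mzmin/mzmax bounds of the feature above into ppm offsets
# from mzmed, to check the quoted ppm window.

def ppm_window(mzmed, mzmin, mzmax):
    """Return (low, high) offsets of the m/z bounds in ppm of mzmed."""
    to_ppm = lambda mz: (mz - mzmed) / mzmed * 1e6
    return to_ppm(mzmin), to_ppm(mzmax)

lo, hi = ppm_window(738.5467697, 738.5036619, 738.5628371)
print(f"{lo:+.1f} ppm / {hi:+.1f} ppm")  # roughly -58 / +22 ppm
```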
If you have any feedback or think our analysis of XCMS could be improved somehow, I would love to hear about what people do setting up the mz window in their runs.

Thanks!

Re: m/z value off by 0.1~0.2 Da

Reply #9
@hwhitwell Perhaps your version of ProteoWizard still doesn't do the conversion correctly? Have you tried my method using massWolf?

@metaboRap The best thing would be to post a reproducible example, or at the very least the settings/script you used. Is the 738.5467 different from what the peak should be, or do you just find the range too large? Did you check your converted raw files? If this is related to the conversion problem, you should see whether the raw data has the right masses or not. The number of features depends on a lot of parameters and is not a good measure of quality.
Blog: stanstrup.github.io