Messages - CoreyG

XCMS - FAQ / Re: XCMS - MS1 scans empty
I didn't have any Agilent QqQ precursor ion scan data on hand, but I could probably generate some next week if required.

Anyway, this is an old, janky solution I was using to "convert" MS2 data so that it looks like MS1 data:
Code:
## Load mzR library
library(mzR)

## Open the mzML file that you want to convert
dat<-openMSfile('myMSdata.mzML')

## Get spectra data
pks <- spectra(dat)
## Get file header
hdr <- header(dat)

## Remove all scans but ms2 scans
pks<-pks[hdr$msLevel==2]
hdr<-hdr[hdr$msLevel==2,]

## Take a quick look and make sure everything looks ok
head(hdr)

## Provide new scan/acquisition numbers
hdr$seqNum<-hdr$acquisitionNum<-seq(nrow(hdr))

## Clear all precursor charge/Intensity/MZ/ScanNum columns
## Not sure if this is required and how precursor ion scans will differ
hdr$precursorCharge<-hdr$precursorIntensity<-hdr$precursorMZ<-hdr$precursorScanNum<-0

## Overwrite msLevel to 'pretend' to be MS1 data
hdr$msLevel<-1

## Write out the new 'MS1' data
writeMSData(pks,'myAlteredMSdata.mzML',header=hdr)
There are a few fields that still contain MS2 information, so I can't say whether they will conflict with anything downstream.
In short, I'm not sure how well this will work for you, but let everyone know if it's helpful.
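If it's any use, here's a quick sanity check on the converted file (same mzR functions as above):
Code:
## Re-open the converted file and confirm every scan now reports as MS1
chk<-openMSfile('myAlteredMSdata.mzML')
table(header(chk)$msLevel)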

Good luck!
XCMS - FAQ / Re: XCMS - MS1 scans empty
Very cool resources, Jan.

AmelieV, if you only have one product ion per file, it's probably sufficient to "pretend" the MS2 data are MS1 scans.
Pretty sure I have a solution for that somewhere and some precursor ion scan data to test it on. I'll see if I can dig it up.

However, if you have already found the precursor masses, it's probably quickest to set up a MassHunter Quant or Qual method to integrate all the peaks (assuming that's what you are interested in?).
XCMS - FAQ / Re: XCMS - MS1 scans empty
Hi AmelieV,

You can find some quick notes and screenshots at the following sites: https://ccms-ucsd.github.io/GNPSDocumentation/fileconversion/ and https://xcmsonline.scripps.edu/docs/fileformats.html.

There are probably more detailed descriptions/tutorials around, including on this forum. However, I've found that XCMS is fairly robust to most (reasonable) settings in MSconvert.
I'd say the most important thing is to have 'peak picking' as a filter and 'MS levels' with a 1 in the first position (indicating you want MS1 centroided). Both of the links above show this.
The difference in settings between the links shouldn't affect XCMS function (mzXML vs mzML, 32 vs 64 bit, zlib compression).

Hope this can help you get things working.
R packages for metabolomics / Re: CliqueMS new R package for the annotation of adducts and fragments in LC-MS
Thanks for sharing, Osenan.

I've been using different packages for annotation of isotopes, adducts, dimers and fragments.
There are, of course, many ways this can be done: from using just exact mass, to examining co-eluting peak shapes, to correlating peak areas.

I'd be interested in looking at how your network algorithm can improve our annotations.

Look forward to some interesting discussions in the future!

Cheers,
Corey
Mass spectrometers / Re: Suggestion to purchase of a high-resolution MS
Hi Sam,

I don't have a lot of HRMS experience, but I've gradually been doing more. Our focus is high-throughput, so robustness is something we care about as well. As a general rule, the top spec instruments tend to be less robust than the lower tier models, although there are always exceptions.

So far I've used an Agilent qTOF (6540) and two Thermo Orbitraps (Fusion Lumos and HF-X). All of these were set up for analytical-flow LC-MS.
The Agilent system had quite a few boards replaced over a few years, but I can't say that the instrument was looked after particularly well.
The Fusion Lumos ran well for over 1000 continuous samples (plasma), but beyond that run I can't say how robust the machine is.
The HF-X had a bad reputation for 'dirtying', but Thermo says they have fixed this in the current generation. If you are willing to wait a few months I can fill you in on how it goes :)

For the work that I was doing (lipidomics), the much higher resolution given by the orbitrap was very useful. Overall sensitivity was higher with the orbitraps as well - but we are comparing 2 very new instruments to a much older one.
However, both the orbitraps have very rough ion funnels. This causes in-source fragmentation of fragile molecules. This isn't exclusive to Thermo instruments.
Most vendors are aware of this, so you should bring it up with them if this is a concern.

Regarding quantitation, people expect different levels of it. Most metabolomics/lipidomics people have a relaxed view on quantitation. That is, there are caveats and assumptions that everyone accepts can't be resolved (not enough internal standards, shotgun vs LC...).
Most newer instruments provide a fairly decent dynamic range. Certainly much more than the expected variability you would see in a single metabolite in a group of people. So I guess it's important to ensure that sample prep is correct to put the right amount of 'stuff' in the machine.

I'm sure others can chime in with some more experiences and knowledge.

Cheers,
Corey
XCMS / Re: peak group detected with `peakDensityParam` but not listed in feature matrix
Hi cpataki,

My understanding of 'groupChromPeaks' is that it first generates a sequence of overlapping mass windows:
Code:
mass <- seq(peaks[1, "mz"], peaks[nrow(peaks), "mz"] + binSize, by = binSize / 2)
Then it finds which peaks fall in each window and performs density-based grouping on them (controlled by 'bw').

So, if you know that your instrument has very high mass accuracy/precision, you can reduce 'binSize' to a smaller value. This should ensure that peaks with noticeably different masses aren't grouped together.
Thereafter, you can reduce 'bw' to fine-tune the grouping based on RT.

You could get a rough visualization of the mass groupings with some basic plotting:
Code:
mzr<-c(153.03,153.06)
rtr_1<-c(155, 175)
binSize<-0.01

## Get peaks that you want to find in a feature
specificPeaks<-chromPeaks(xdata,rt=rtr_1,mz=mzr)

## Plot the peak locations
plot(specificPeaks[,"rt"],specificPeaks[,"mz"])

## Create approximate grouping windows (not the values that 'groupChromPeaks' will generate)
mass<-seq(min(specificPeaks[, "mz"]), max(specificPeaks[, "mz"]) + binSize, by = binSize / 2)
abline(h=mass,col=rep(c('red','green')))
## Note these are overlapping windows. So look at groupings between the coloured lines
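If you want a similar rough feel for the RT side of the grouping, here is a minimal sketch using the objects from above (the 'bw' value is only illustrative, and this is not the exact weighting that 'groupChromPeaks' applies):
Code:
bw<-30

## Peaks falling in the first overlapping mass window (mass[1] to mass[1] + binSize)
inWindow<-specificPeaks[specificPeaks[,"mz"]>=mass[1] & specificPeaks[,"mz"]<=mass[1]+binSize,,drop=FALSE]

## Density of their retention times; separate modes suggest separate features
plot(density(inWindow[,"rt"],bw=bw),main="RT density in first mass window")
rug(inWindow[,"rt"])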
XCMS / Re: peak group detected with `peakDensityParam` but not listed in feature matrix
Glad you got it working, cpataki.

For future reference/readers, you can find the feature that contains your peaks through a brute force approach, such as:
Code:
## Save peak and feature definitions
Pks<-chromPeaks(xdata)
fDef<-featureDefinitions(xdata)

## Get peaks that you want to find in a feature
specificPeaks<-chromPeaks(xdata,rt=rtr_1,mz=mzr,type='all',msLevel=1L)

## Identify the row number of each peak from Pks
rows<-which(row.names(Pks)%in%row.names(specificPeaks))

## Search each feature definition peak list and identify the feature/s that contain every peak from above
fDef_row<-which(sapply(fDef$peakidx,function(x) all(rows%in%x)))

## Show the identified feature/s
fDef[fDef_row,]
XCMS / Re: peak group detected with `peakDensityParam` but not listed in feature matrix
Hi cpataki,

You can look at the peak table with the following command:
Quote
chromPeaks(xset5,rt=rtr_1,mz=mzr,type='all',msLevel=1L)

Both highlightChromPeaks and plotChromPeakDensity use this to find the peaks inside the ranges you specify.

What you should know, however, is that plotChromPeakDensity doesn't plot 'features'. It plots the density of the peaks found (see above) using the parameters you gave it (gparam). So, it is possible that 'groupChromPeaks' is including additional nearby (mz or rt) peaks, so that the median rt and mz are outside the range you specify in mzr and rtr_1.

So I would start by increasing the mzr range and trying to look again.
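For example, a quick sketch of widening the window and looking again (the 0.01 padding is just an illustration):
Code:
## Pad the m/z range slightly and re-check which peaks fall inside it
mzr_wide<-c(mzr[1]-0.01,mzr[2]+0.01)
chromPeaks(xset5,rt=rtr_1,mz=mzr_wide,type='all',msLevel=1L)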

Cheers,
Corey
XCMS / Re: Side/ Partial Peak artifacts
Hi hhabra,

Could you provide links to some of the forum discussions that you mention? I haven't seen them and would be interested in reading about what has been done before.

There are potentially a few places that you can try to limit these side/partial peaks. You seem to have tried a few approaches with some success.
I would suggest trying to alter the grouping algorithm settings. Increasing bw will eventually cause these to merge into the larger features, but this may have unwanted effects on other features. So I would look at changing the minfrac and minsamp settings.
As you are using fillPeaks, it will "find" the missing peaks in the rest of the samples. Looking at the images you posted, it doesn't look like XCMS will find the same side/partial peaks in many samples.
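For reference, here's a rough sketch of what adjusting those grouping settings looks like in the newer xcms interface ('xdata', 'sample_groups' and the numbers are placeholders, not recommendations):
Code:
## Require a peak group in a larger fraction of samples before it becomes a feature
pdp<-PeakDensityParam(sampleGroups=sample_groups, bw=5, binSize=0.01,
                      minFraction=0.5, minSamples=2)
xdata<-groupChromPeaks(xdata, param=pdp)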

If that doesn't work, perhaps we can start to think of some post-processing solutions to handle it.
XCMS / Re: Scan numbering for DDA/IDA-experiments
Hi Tony,

While I don't know the answer to your question for certain, you could probably test it by filtering out the MS/MS data using MSConvert. Compare the original data to the filtered data and see if they are the same.

Alternatively, you could take a look at one of the files like this:
Code:
# Load MSnbase (provides readMSData and spectra)
library(MSnbase)

# Read one data file, keeping only MS1 spectra ('files' and 'pd' as defined in your script)
raw_data<-readMSData(files = files[1], pdata = new("NAnnotatedDataFrame", pd), msLevel. = 1)
 
# Extract out spectra data
spec<-spectra(raw_data)

# Iterate through the spectra list and obtain relevant information
specl<-lapply(spec,function(x) c(x@msLevel,x@rt,x@scanIndex))

# Compact list to matrix
specl<-do.call(rbind,specl)

This will create a matrix that contains the msLevel, retention time and scan number for every scan in the data. It shouldn't take long to cross-reference a couple of peaks to confirm whether 'scmin/scmax' need to be adjusted. If so, you have just generated a matrix that can be used to calculate the new scan counts!
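If they do need adjusting, a minimal sketch of a lookup between the MS1-only numbering and the original scan index, built from the 'specl' matrix above (columns in the order they were created), could be:
Code:
## Name the columns, then add a column that counts only the MS1 scans
colnames(specl)<-c("msLevel","rt","scanIndex")
lookup<-cbind(specl,ms1Scan=seq_len(nrow(specl)))
head(lookup)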

Regarding lmin and lmax: I am not too sure about these. They can be found in the centWave code, where they appear to be the minimum 'continuous wavelet transform' (CWT) coefficients on either side of a peak. Interestingly, it looks like rtmin/rtmax are defined using the same values.
I could be way off, but I think scmin/scmax are estimates of the peak bounds based on the CWT.

As it looks like 'into' is calculated using the bounds defined in lmin/lmax, perhaps these are the better columns for you to use. But do note that these values are not always referenced from scan 1.
Other / Re: Peak alignment with large dataset (over 2500 samples and growing)
Quote
On another matter, I’m wondering how to cut down on the number of features while still maintaining a low intensity threshold. Currently I have ~13,000 features. My goal is to be able to get my peak table into EZinfo, which is not able to handle my 1950x13,000 peak table. I am interested in minor compounds, so I don’t just want to filter by intensity. I have a few ideas, and I would love it if anyone could offer feedback.
Depending on what you are trying to do, you could get the median peak area from each of the triplicates. That will cut your sample rows to roughly a third.
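A minimal sketch of that, with placeholder names ('peakTable' as a features x injections matrix and 'injSample' mapping each injection to its extract):
Code:
## Collapse triplicate injections to a per-extract median for every feature
medTable<-sapply(split(seq_along(injSample),injSample),
                 function(idx) apply(peakTable[,idx,drop=FALSE],1,median,na.rm=TRUE))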

Alternatively, you could filter the dataset to remove isotopes. Depending on the average number of carbons in your metabolites and their signal/abundance, you might be able to reduce the dimensions 2- to 4-fold. The same can be done by removing adducts.
I've briefly played around with CAMERA for this, but ended up using mz.unity.

As a less useful suggestion, is it possible to use R for your data analysis?
For a lot of multivariate analysis, mixOmics does pretty well. The website has a lot of examples and the inbuilt plotting functions have come a long way.

Quote
One idea is to set strict filters during correspondence to cut down on the number of “false” features. I tried re-running XCMS on this same dataset but using different params. In particular, for correspondence, I changed the way sample groups are defined. Previously, all samples were in the same group. This time, I defined each sample as its own group (so ~600 groups). For PeakDensityParam, I set minFraction=2/3 & minSamples=2. My thinking was that a true feature would be present in all 3 injections of a sample, but I set the cutoff to 2 out of 3 to be on the safe side. In this way, I hoped to eliminate false features. At any rate, the correspondence step took much longer than before, and I ran out of memory before the script was completed. I tried a couple times with the same result.
My thoughts on this differ from many others' in the 'untargeted' scene. I'm really only interested in features that are present in nearly all samples (<10% missing values). So, I always ask whether people expect to see features entirely missing from certain samples/groups.

The nice thing about XCMS is that you can set these parameters fairly loose early in the workflow. Then after fillChromPeaks, you can be more stringent.
With so many samples, I would imagine that seeing the same feature in multiple groups is an almost certainty. So maybe put every sample in 1 group, but set minFraction=10/600 (or something of that sort).

I'd love to hear other people's thoughts on this, as well.

Quote
Another idea is to filter out less informative markers based on variation of each feature across my sample set. My idea is to calculate the coefficient of variation for each marker across the entire dataset, and then exclude any markers below a certain CV value. I understand that p value and fold-change are often used for this kind of filtering, but as I understand it, these only make sense if the dataset contains multiple experimental groups. I don’t have any groups in my dataset; this is just an exploratory analysis. Does anyone have knowledge of or experience with filtering in this way? Any papers that you can suggest? How to determine an appropriate cutoff value for CV?

Thanks!
This is certainly a way you could go.
Perhaps there is a way to empirically determine a good CV cutoff?
If CV is mostly driven by the biological grouping, you could compare the per-feature CVs when all injections are used against the CVs after averaging the triplicates. Then determine a threshold CV by permuting the sample labels and repeating the process (so you end up averaging random non-triplicates). Whatever the 95th percentile of those permuted CVs is, that is your critical value.
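A very rough sketch of that permutation idea, reusing the placeholder names from the sketch above ('peakTable' as features x injections, 'injSample' mapping injections to extracts):
Code:
cv<-function(x) sd(x,na.rm=TRUE)/mean(x,na.rm=TRUE)

## Average the injections for a given labelling, then take per-feature CVs
cvAfterAveraging<-function(labels){
  avg<-sapply(split(seq_along(labels),labels),
              function(idx) rowMeans(peakTable[,idx,drop=FALSE],na.rm=TRUE))
  apply(avg,1,cv)
}

## CVs with the true triplicate labels vs. CVs with permuted labels
trueCV<-cvAfterAveraging(injSample)
permCV<-replicate(100,cvAfterAveraging(sample(injSample)))

## 95th percentile of the permuted CVs as the critical value
quantile(permCV,0.95,na.rm=TRUE)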
Other / Re: Peak alignment with large dataset (over 2500 samples and growing)
Hello folks,

Here's an update from my end.

Quote
I returned from vacation to find CoreyG's helpful responses. It turns out that I was not using "value='into'". I changed this param, and now my data look much better.
Glad to hear I could be of help.

Quote
I've been using the Brunius batchCorr package, because I already know how to use R. However, given the characteristics of my dataset, I wonder if it is adequate.

Characteristics:
-- ~1950 files representing ~570 plant extracts (triplicate injection) + QC samples
-- 13 batches
-- All extracts are from the same species
-- The QC sample is an extract of about 40 accessions pooled together. However, it looks quantitatively different than most of the extracts in the sample set: the later eluting peaks of the QC sample are generally bigger while the early peaks are smaller. I don't think there are many qualitative differences between QC and other samples. However, I can imagine that these might translate into presence/absence differences in the peak table for minor compounds.
The differences between QCs and samples shouldn't be that big of a deal.
Depending on what batch correction method you use, you can assess the improvement in CV (RSD) of the QC features to determine how useful the batch correction was. Now, if the batch correction method optimized itself based on minimizing QC variation, then this approach is biased. Cross-validation should then be used to assess performance.

A simple visualization is to plot the pre-corrected CVs on the x-axis and the post-corrected CVs on the y-axis. Points that fall below the diagonal were improved; points on the diagonal weren't affected; points above the diagonal were negatively affected.
This may be an easy way to get a 'gut' feel for what method works best for you.
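Something like this, assuming the per-feature QC CVs before and after correction are already in two vectors (names are placeholders):
Code:
## Pre- vs post-correction CVs of the QC features; the dashed line is 'no change'
plot(cv_pre,cv_post,xlab="CV before correction",ylab="CV after correction")
abline(0,1,lty=2)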

Quote
-- The extracts--other than QC--are not standardized by concentration or by equivalent weight of plant material. There is a range of weight of plant material that was extracted. Nonetheless, I do have for each sample the weight of plant material extracted and the weight of solvent used for extraction. From these values, I have generated a sample:solvent correction factor.
-- This is a pilot dataset and was not intended for publication.

My thinking is, now that the batch correction has been done, the next step is to apply the sample:solvent correction factor. The simplest thing to do would be, for each feature in a sample, divide the peak area value by the correction factor for that sample. However, I realize that detector response may not be linear in the range of interest for each feature; thus, the results may not be completely accurate. Nonetheless, I can't think of a better option. Any feedback on my approach?

This is a fairly common approach. Of course, you should always try to keep the sample:solvent ratio as consistent across all samples as possible. Remember that different sample:solvent ratios will cause variability in extraction efficiency, ionization and detector response.

If you are concerned about introducing associations into your data, consider using a linear model to remove the correction factor.
Get the residuals from lm(peakArea~correctionFactor). This allows the detector response to not be 1:1, but doesn't do much for non-linearity.
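A minimal sketch for a single feature (vector names are placeholders; re-centring on the mean just keeps the corrected values on a familiar scale):
Code:
## Remove the association with the correction factor, keep the leftover variation
fit<-lm(peakArea~correctionFactor)
corrected<-residuals(fit)+mean(peakArea,na.rm=TRUE)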
XCMS / Re: findChromPeaks-centWave {xcms} - Prefilter Settings and Code Documentation
Hi Tony,

Yes, I do believe you are correct with your interpretation of the documentation.
It looks like the prefilter check is done here: do_findChromPeaks-functions.R. The vapply iterates over each ROI index list and counts the number of scans with intensity at or above the prefilter intensity threshold (I). If that count is at least 'k', the ROI is kept.
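Conceptually, for a single ROI it boils down to something like this (just the logic, not the actual xcms code; 'roiIntensities' is a placeholder for the scan intensities within that ROI):
Code:
## prefilter = c(k, I): keep the ROI if at least k scans reach intensity I
keepROI<-sum(roiIntensities>=I)>=k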

Also, 'scans' would probably be a better word to use  :)
XCMS / Re: Question regarding the plot-methods {MSnbase}, type = "XIC"
Hi Tony,

There are potentially a few different ways of getting around this kind of problem. But I can't quite think of anything elegant...
At first, I was hoping that you could pass 'panel.last=c(abline(v = 181.0, col = "red", lty = 2),abline(h = 106.05, col = "red", lty = 2))' as another parameter, but that doesn't work.

As a pretty poor workaround, I've pulled the code from the MSnbase GitHub page and edited the plotting functions.
Code:
plotXIC_MSnExp_ex <- function(x, ...) {
  ## Restrict to MS level 1
  x <- filterMsLevel(x, 1L)
  if (!length(x))
    stop("No MS1 data available")
  fns <- basename(fileNames(x))
  if (isMSnbaseVerbose())
    message("Retrieving data ...", appendLF = FALSE)
  x <- as(x, "data.frame")
  x <- split(x, x$file)
  if (isMSnbaseVerbose())
    message("OK")
  ## Check if we are greedy and plot a too large area
  if (any(unlist(lapply(x, nrow)) > 20000))
    warning("The MS area to be plotted seems rather large. It is suggested",
            " to restrict the data first using 'filterRt' and 'filterMz'. ",
            "See also ?chromatogram and ?Chromatogram for more efficient ",
            "functions to plot a total ion chromatogram or base peak ",
            "chromatogram.",
            immediate = TRUE, call = FALSE)
  ## Define the layout.
  dots <- list(...)
  if (any(names(dots) == "layout")) {
    if (!is.null(dots$layout))
      layout(dots$layout)
    dots$layout <- NULL
  } else
    layout(.vertical_sub_layout_ex(length(x)))
  tmp <- mapply(x, fns, FUN = function(z, main, ...) {
    .plotXIC_ex(x = z, main = main, layout = NULL, ...)
  }, MoreArgs = dots)
}


.plotXIC_ex <- function(x, main = "", col = "grey", colramp = topo.colors,
                     grid.color = "lightgrey", pch = 21,
                     layout = matrix(1:2, ncol = 1), ...) {
  # start edit
  plot_strip<-function(..., v, h) plot(...)
  dots <- list(...)
  # end edit
 
  print(list(...))
  if (is.matrix(layout))
    layout(layout)
  ## Chromatogram.
  bpi <- unlist(lapply(split(x$i, x$rt), max, na.rm = TRUE))
  brks <- lattice::do.breaks(range(x$i), nint = 256)
  par(mar = c(0, 4, 2, 1))
 
  # start edit
  plot_strip(as.numeric(names(bpi)), bpi, xaxt = "n", col = col, main = main,
       bg = lattice::level.colors(bpi, at = brks, col.regions = colramp), xlab = "",
       pch = pch, ylab = "", las = 2, ...)
  # end edit
 
  if(!is.null(dots$v)) abline(v=dots$v,col='red',lty=2)
 
  mtext(side = 4, line = 0, "Intensity", cex = par("cex.lab"))
  grid(col = grid.color)
  par(mar = c(3.5, 4, 0, 1))
 
  # start edit
  plot_strip(x$rt, x$mz, main = "", pch = pch, col = col, xlab = "", ylab = "",
       yaxt = "n", bg = lattice::level.colors(x$i, at = brks, col.regions = colramp),
       ...)
 
  if(!is.null(dots$h)) abline(h=dots$h,col='red',lty=2)
  if(!is.null(dots$v)) abline(v=dots$v,col='red',lty=2)
  # end edit
 
  axis(side = 2, las = 2)
  grid(col = grid.color)
  mtext(side = 1, line = 2.5, "Retention time", cex = par("cex.lab"))
  mtext(side = 4, line = 0, "m/z", cex = par("cex.lab"))
}

.vertical_sub_layout_ex <- function(x, sub_plot = 2) {
  sqrt_x <- sqrt(x)
  ncol <- ceiling(sqrt_x)
  nrow <- round(sqrt_x)
  rws <- split(1:(ncol * nrow * sub_plot), f = rep(1:nrow,
                                                   each = sub_plot * ncol))
  do.call(rbind, lapply(rws, matrix, ncol = ncol))
}

If you run these functions, then you can plot using 'plotXIC_MSnExp_ex(serine,v=181.0,h=106.05)'.
As you would expect, this isn't a great solution. But if it is particularly helpful, perhaps suggest it on the MSnbase GitHub page and it might get implemented in a future release.

Of course, if anybody else has a better solution, please feel free to post it.

Cheers,
Corey