The June 2010 CACM has an interesting article by Jilin Chen and Joseph Konstan of the University of Minnesota on "Conference Paper Selectivity and Impact." The abstract gets right to the point:
“Studying the metadata of the ACM Digital Library (http://www.acm.org/dl), we found that papers in low-acceptance-rate conferences have higher impact than those in high-acceptance-rate conferences within ACM, where impact is measured by the number of citations received. We also found that highly selective conferences — those that accept 30% or less of submissions—are cited at a rate comparable to or greater than ACM Transactions and journals.”
A key paragraph later in the paper gives more detail:
“Addressing the second question — on how much impact conference papers have compared to journal papers — in Figures 3 and 4, we found that overall, journals did not outperform conferences in terms of citation count; they were, in fact, similar to conferences with acceptance rates around 30%, far behind conferences with acceptance rates below 25% (T-test, T[7603] = 24.8, p < .001). Similarly, journals published as many papers receiving no citations in the next two years as conferences accepting 35%–40% of submissions, a much higher low-impact percentage than for highly selective conferences. The same analyses over four- and eight-year periods yielded results consistent with the two-year period; journal papers received significantly fewer citations than conferences where the acceptance rate was below 25%.”
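For readers curious what that reported statistic corresponds to, here is a minimal sketch of the kind of two-sample t-test the authors cite, comparing per-paper citation counts for highly selective conferences against journals. This is not the authors' analysis; the citation counts below are made-up illustrative numbers, not data from the ACM Digital Library.

```python
# Hedged sketch: a two-sample t-test on citation counts, the kind of
# comparison reported in the quoted paragraph. Numbers are hypothetical.
from scipy import stats

# Hypothetical two-year citation counts per paper.
selective_conf_citations = [12, 7, 0, 15, 9, 22, 4, 11, 6, 18]
journal_citations = [5, 3, 0, 8, 2, 10, 1, 6, 0, 4]

t_stat, p_value = stats.ttest_ind(selective_conf_citations, journal_citations)
print(f"T = {t_stat:.2f}, p = {p_value:.4f}")
```

With the real data, the very large sample (the reported degrees of freedom are 7603) is what makes even a modest difference in mean citation counts come out highly significant.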
We have to assume that this study applies only to Computer Science, for which the ACM Digital Library is a very good sample, and not to other disciplines (e.g., EE) or even to narrow sub-disciplines within CS. Different disciplines have very different publication patterns. But it does confirm our own anecdotal evidence from tracking citations to papers written in our ebiquity lab over the past ten years: those published in top conferences tend to get more citations than those in journals.