*** Recommendation: Your overall rating.

Accept if room (top 30% but not top 15%, borderline for INFOCOM) (3)

*** Contributions: What are the major issues addressed in the paper? Do you consider them important? Comment on the degree of novelty, creativity, impact, and technical depth in the paper.

The paper considers the important and timely problem of implementing fast counters efficiently. The proposed solution is creative, and is analyzed in about as much depth as can be expected in the given space. This paper would be of interest to anyone working in the areas of measurement and intrusion detection.

*** Strengths: What are the major reasons to accept the paper? [Be brief.]

Interesting and counterintuitive proposed solution. Solid technical analysis (though I did not work through the maths in detail). Claims to be the first solution to the problem of maintaining counters that can be both estimated and updated per packet.

*** Weaknesses: What are the most important reasons NOT to accept the paper? [Be brief.]

It is not at all clear to this reviewer how the proposed HAC scheme can work.

*** Detailed Comments: Please provide detailed comments that will be helpful to the TPC for assessing the paper. Also provide feedback to the authors.

I like the idea, and it certainly is counterintuitive to increment the SRAM part of the counter only when the DRAM part overflows! But the sticking point for this reviewer is mapping the specification "additional memory which can be accessed only occasionally, say once every 16 time units on average" and "increment ... every time unit with probability 1/16" onto the realities of DRAM. In particular, there is nothing to prevent the algorithm from needing to increment B[i] several time units in a row, and that just isn't possible with DRAM. In other words, it is the difference between the average access rate and the potential instantaneous access rate that concerns me.
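To make the instantaneous-rate concern concrete, here is a minimal simulation sketch (my own hypothetical illustration, not the authors' code, with invented names such as `simulate_accesses`): if each time unit independently triggers a DRAM increment with probability 1/16, runs of back-to-back accesses still occur regularly, even though the average rate is one access per 16 time units.

```python
import random

def simulate_accesses(p=1/16, steps=100_000, seed=0):
    """Simulate per-time-unit Bernoulli(p) increments to the backing
    (DRAM) counter and return the longest run of consecutive time
    units that each demanded a DRAM access."""
    rng = random.Random(seed)
    longest = run = 0
    for _ in range(steps):
        if rng.random() < p:
            run += 1
            longest = max(longest, run)
        else:
            run = 0
    return longest

# With p = 1/16, any given pair of adjacent time units both requires a
# DRAM access with probability 1/256, so roughly 390 such adjacent
# pairs are expected in 100,000 time units -- bursts are routine, not rare.
```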
There will be some probability that an increment to B[i] will be lost, and you have not, as far as I can tell, accounted for that possibility in your error calculations. (In fact, you have not even explicitly admitted this possibility in the paper, as far as I can see.) Yes, this probability can be made arbitrarily small by queueing, but what about the asynchrony introduced by such an approach, especially with "millions" of counters to keep track of? How implementable is this?

End of section I.A: how in the world do you get $l(\mu)\in \{4,5,6\}$ from $\mu$ between 1/50 and 1/10, and $l(\mu) = 1/\mu$? (Taken literally, $\mu \in [1/50, 1/10]$ and $l(\mu) = 1/\mu$ would give $l(\mu)$ between 10 and 50.)

A nit: what do the terms "active" and "passive" have to do with the ability to estimate the counter value without DRAM access?

===== Review =====

*** Recommendation: Your overall rating.

Likely Reject (top 50% but not in top 30%, needs more work) (2)

*** Contributions: What are the major issues addressed in the paper? Do you consider them important? Comment on the degree of novelty, creativity, impact, and technical depth in the paper.

The paper presents two new algorithms for high-speed statistics counter architectures. The algorithms are useful in situations where the actual value of a statistics counter (or a close estimate of it) is required on a per-packet basis.

*** Strengths: What are the major reasons to accept the paper? [Be brief.]

The algorithms presented are novel and are shown to work using both analysis and simulation.

*** Weaknesses: What are the most important reasons NOT to accept the paper? [Be brief.]

Some of the details of the algorithms, and the choices of parameters to use in practical settings, are not outlined clearly.

*** Detailed Comments: Please provide detailed comments that will be helpful to the TPC for assessing the paper. Also provide feedback to the authors.

What is the point of showing both algorithms? Do the authors envision situations in which SAC is likely to be more useful than HAC?
To me, this is a no-brainer: the errors with SAC seem high enough to render it useless in almost all practical situations I can imagine.

In Section VI.B, what are the experiments "counting"? Heavy hitters? What are the thresholds used?

Most of the figures are impossible to read.

Other than these issues, the paper is generally well written and thorough.

===== Review =====

*** Recommendation: Your overall rating.

Likely accept (top 15% but not top 10%, significant contribution) (4)

*** Contributions: What are the major issues addressed in the paper? Do you consider them important? Comment on the degree of novelty, creativity, impact, and technical depth in the paper.

The paper presents a method for storing counters in a small amount of memory by giving up the requirement that the counters be precisely accurate. Having given up this requirement, the authors present two clever sampling approaches to maintaining approximate counters.

*** Strengths: What are the major reasons to accept the paper? [Be brief.]

The basic idea here seems really clever, and it is backed up by analysis and experimental data.

*** Weaknesses: What are the most important reasons NOT to accept the paper? [Be brief.]

I would have liked to see a fuller analysis of the cost of the new method: e.g., it involves extra computation over some previous techniques, and a comparison of this extra cost seems warranted.

*** Detailed Comments: Please provide detailed comments that will be helpful to the TPC for assessing the paper. Also provide feedback to the authors.

I have one serious comment, which is that a martingale-based approach might greatly simplify the proofs of the theorems.

Other points:
- "Much smaller than" can be better represented in LaTeX using $\ll$.
- When describing SAC, it would help the reader if the text were more directly related to the algorithm in Figure 1 (I know they are the same, but it could be more clearly written).
- A line in the proof of Thm 1 runs over the column edge.
- The fonts used in the figures are much too small.

===== TPC Review =====