I don't want to push my opinion too much about Cathy Davidson's grading experiments at Duke. Not that I don't have opinions; it's just that I don't have any better answers than everyone who commented on the article, since grading is a puzzle we all struggle with. What I'd like to add are three ways that technology and learning technologists can assist faculty who, like Professor Davidson, want to experiment with more authentic and effective ways to use grading to promote learning.
1 - Partner with Your Learning Technologist: I'd encourage any faculty member who wants to experiment with non-traditional grading schemes to partner with your campus learning technologists and learning designers. This means planning to work with your learning technology team at least 12 weeks before the course begins. I'm betting that at most institutions the learning technology team will be overjoyed to partner with you on a course design / re-design process. This process involves developing the course around learning modules, each containing specific learning outcomes and assignments that support your teaching and learning goals. This development process, and the partnership with your learning technologist, will allow options for assessment to surface. Low-stakes, graded formative assessments are one method your learning technologist can help you enact to provide students with feedback while sidestepping some of the demotivating aspects of high-stakes grading that we all dislike. An innovative grading scheme embedded in a course with strong pedagogical design has a much better chance of succeeding, and because an assessment of the process can be built in from the beginning, your grading experiment can then be scaled to other courses. I estimate about 50 hours of faculty time to develop a full new course, matched by about 50 hours of learning technologist/designer time. This is a serious commitment on the front end, but it will result in a much higher quality course and a stronger foundation for experimenting with more effective methods of assessment.
2 - Set Up Discussion Boards in Your LMS for Peer Feedback: I'm not sure whether Professor Davidson took advantage of her LMS's discussion board feature to manage the peer review process. I've found the discussion board to be an excellent tool for students to hand in their papers and receive peer and faculty feedback. We always recommend that faculty model appropriate and effective critiques for students, as learning to critique is an important skill. For instance, we recommend a "sandwich" technique, in which the reviewer first notes the positive aspects, then offers the constructive criticism, and finally ends on a positive note. This technique allows the writer to absorb the critique without feeling defensive or devalued. The discussion board also makes all critiques and feedback public and transparent.
3 - Set Up Journals in Your Course for One-on-One Feedback: The LMS blog tool (or, even better, a public blog tool) is a great way to increase the amount of writing and collaboration in a course. An extension to this methodology I'd like to suggest is the private "journaling" feature that most LMSs offer. Requiring each student to journal every week, often using an established rubric for reporting productivity and learning, is a wonderful way to ensure that the student and professor are on the same page regarding assessment. The journal allows for a dialogue around assessment, with the professor able to assess a range of student inputs (papers, peer reviews, class discussions, collegiality) rather than grading individual assignments. With a journaling system, the truly outstanding contributors can emerge and be recognized, without the disincentives to student creativity and risk-taking that traditional grading can unwittingly cause.
I'm sure you have other ideas about how learning technologists and learning technology can contribute to experiments in grading. The key, I think, is to recognize that these experiments require considerable investments of time and effort, and inputs beyond those associated with traditional courses.