This thesis focuses on understanding collective discourse and employing its properties to build better decision support systems. We first define collective discourse as a collective human behavior in content generation. In social media, collective discourse often takes the form of a collective reaction to an event: in response to a well-defined subject (a movie release, a breaking story, a newly published paper), many individuals independently produce writings about it (movie reviews, news headlines, citation sentences). To understand collective discourse, we analyze a wide range of real-world datasets, from citations to movie reviews. We show that all of these datasets exhibit diversity of perspective, a property seen in other collective systems and a criterion for wise crowds. Our experiments also confirm that the network of perspective co-occurrences exhibits the small-world property, with high clustering of perspectives. Finally, we show that non-expert contributions to collective discourse can be used to answer simple questions that are otherwise hard to answer. As a concrete example of collective discourse, we examine citations to scholarly work. We show that they contain important information conveying the key features and basic underpinnings of a particular field, its early and late developments, important contributions, and basic definitions and examples, all of which enable non-experts to understand the field rapidly. We then present C-LexRank, a system that exploits scientific collective discourse to produce automatically generated, readily consumable technical surveys. Lastly, we extend our experiments to summarize an entire scientific topic: we generate extractive surveys of a set of Question Answering (QA) and Dependency Parsing (DP) papers, their abstracts, and their citation sentences, and show that citations carry unique survey-worthy information.