Managing Gigabytes: Compressing and Indexing Documents and Images, Second Edition

Morgan Kaufmann, May 3, 1999 - Business & Economics - 519 pages

"This book is the Bible for anyone who needs to manage large data collections. It's required reading for our search gurus at Infoseek. The authors have done an outstanding job of incorporating and describing the most significant new research in information retrieval over the past five years into this second edition."
Steve Kirsch, Cofounder, Infoseek Corporation

"The new edition of Witten, Moffat, and Bell not only has newer and better text search algorithms but much material on image analysis and joint image/text processing. If you care about search engines, you need this book: it is the only one with full details of how they work. The book is both detailed and enjoyable; the authors have combined elegant writing with top-grade programming."
Michael Lesk, National Science Foundation

"The coverage of compression, file organizations, and indexing techniques for full text and document management systems is unsurpassed. Students, researchers, and practitioners will all benefit from reading this book."
Bruce Croft, Director, Center for Intelligent Information Retrieval at the University of Massachusetts

In this fully updated second edition of the highly acclaimed Managing Gigabytes, authors Witten, Moffat, and Bell continue to provide unparalleled coverage of state-of-the-art techniques for compressing and indexing data. Whatever your field, if you work with large quantities of information, this book is essential reading: an authoritative theoretical resource and a practical guide to meeting the toughest storage and access challenges. It covers the latest developments in compression and indexing and their application on the Web and in digital libraries. It also details dozens of powerful techniques supported by mg, the authors' own system for compressing, storing, and retrieving text, images, and textual images. mg's source code is freely available on the Web.

 

Contents

Text Compression 21
Other performance considerations 99
Minimal perfect hashing 161
Disk-based lexicon storage 169
Random access and fast lookup 176
Image Compression 263
Context-based compression of bi-level images 273
Clairvoyant compression 279
Mixed Text and Images 355
Left-margin search 361
From slope histogram to docstrum 367
Choice of coder 394
Length-limited coding 401
The Information Explosion 431
Guide to the NZDL 469
How the NZDL works 478
References 485
Index 507
About the Authors 519
Copyright


About the author (1999)

Ian H. Witten is a professor of computer science at the University of Waikato in New Zealand. He directs the New Zealand Digital Library research project. His research interests include information retrieval, machine learning, text compression, and programming by demonstration. He received an MA in Mathematics from Cambridge University, England; an MSc in Computer Science from the University of Calgary, Canada; and a PhD in Electrical Engineering from Essex University, England. He is a fellow of the ACM and of the Royal Society of New Zealand. He has published widely on digital libraries, machine learning, text compression, hypertext, speech synthesis and signal processing, and computer typography. He has written several books, the latest being Managing Gigabytes (1999) and Data Mining (2000), both from Morgan Kaufmann.