Djview4 For Mac

DjView 4.9.5 is an improved version of DjView 4.9, adapted for Mac OS X 10.8.4. Files: djview4-4.9.5-binary.zip - binary (application '.app' file). DjVu can advantageously replace PDF, PS, TIFF, JPEG, and GIF for distributing scanned documents. The new portable viewer djview4 is available as a separate package.

All the talks and posters on this page are provided in DjVu, PDF.
Some are also provided in ODP (Open Office's Open Document Format), and PPT (MS PowerPoint).
Caution: a few PDF files do not display correctly in ghostview.

We very strongly encourage interested readers to use the DjVu versions: they display instantly, load much faster, and have no compatibility problems.

  • djview4: Free/Open Source DjVu viewer for Windows, Mac OS-X, and Linux.
  • DjVu plug-in, Windows: 'Official' DjVu plug-in for Windows (free).
  • DjVu plug-in, Mac: 'Official' DjVu plug-in for Mac (free).
  • DjVu.org: links to other DjVu viewers for Windows, Java, Sharp Zaurus, PalmOS, Symbian, Pocket PC, Be OS....
  • On Linux, DjVu is supported by Evince (standard Gnome document viewer), and Okular (standard KDE document viewer).
  • On Windows, DjVu is supported by ACDSee and IrfanView.

A similar seminar was given at the Xerox Research Center Europe in Grenoble the following day.

Title: Learning Feature Hierarchies for Vision

Abstract: Intelligent perceptual tasks such as vision and audition require the construction of good internal representations. Theoretical and empirical evidence suggest that the perceptual world is best represented by a multi-stage hierarchy in which features in successive stages are increasingly global, invariant, and abstract. An important challenge for Machine Learning is to devise 'deep learning' methods for multi-stage architectures that can automatically learn good feature hierarchies from labeled and unlabeled data.

A class of such methods that combine unsupervised sparse coding and supervised refinement will be described. We demonstrate the use of these deep learning methods to train convolutional networks (ConvNets). ConvNets are biologically-inspired architectures consisting of multiple stages of filter banks, interspersed with non-linear operations and spatial pooling operations, analogous to the simple cells and complex cells in the mammalian visual cortex.

A number of applications will be shown through videos and live demos, including a category-level object recognition system that can be trained on the fly, a pedestrian detector, a system that recognizes human activities in videos, and a trainable vision system for off-road mobile robot navigation.

A new kind of 'dataflow' computer architecture, dubbed NeuFlow, was designed to run these algorithms (and other vision and recognition algorithms) in real time on small, embeddable platforms. An FPGA implementation of NeuFlow running various vision applications will be shown. An ASIC is being designed in collaboration with e-lab at Yale, which will be capable of 700 Giga-operations per second for less than 3 Watts.

Title: Learning Feature Hierarchies for Vision

Abstract: Animals and humans autonomously learn to perceive and navigate the world. What 'learning algorithm' does the cortex use to organize itself? Could computers and robots learn to perceive the way animals do, by just observing the world and moving around it? This constitutes a major challenge for machine learning and computer vision.

The visual cortex uses a multi-stage hierarchy of representations, from pixels, to edges, motifs, parts, objects, and scenes. A new branch of machine learning research, known as 'deep learning', is producing new algorithms that can learn such multi-stage hierarchies of representations from raw inputs. I will describe a biologically-inspired, trainable vision architecture called the convolutional network. It consists of multiple stages of filter banks, non-linear operations, and spatial pooling operations, analogous to the simple cells and complex cells in the mammalian visual cortex. Convolutional nets are first trained with unlabeled samples using a learning method based on sparse coding, and subsequently fine-tuned using labelled samples with a gradient-based supervised learning algorithm.

A number of applications will be shown through videos and live demos, including a category-level object recognition system that can be trained on the fly, a pedestrian detector, a system that recognizes human activities in videos, and a trainable vision system for off-road mobile robot navigation. A very fast implementation of these systems on specialized hardware will be shown. It is based on a new programmable and reconfigurable 'dataflow' architecture dubbed NeuFlow.

  • Introduction [PDF]
  • Energy-Based Learning [PDF]
  • Multi-Stage Learning [PDF]
  • Convolutional Networks [PDF]
  • Unsupervised Deep Learning [PDF]

Keynote talk given at the International Conference on Image and Signal Processing in Trois Rivieres, Quebec.

Slides:

  • [Slides in PDF (24.4MB)][Slides in DjVu (12.3MB)][Slides in ODP (Open Office / Open Document Format)(25.1MB)]

NASA IS&T Colloquium delivered at the NASA Goddard Space Flight Center in Maryland (with video).

NASA has a link to the presentation and a video of the talk.

Slides:

  • [Slides in PDF (29.3MB)][Slides in DjVu (15.2MB)][Slides in ODP (27.7MB)]
Direct link to the video webcast.

Talk given at Columbia University for the NSF Workshop on Hybrid Neuro-Computer Vision Systems. The audience was a mixture of neuroscientists, computer vision researchers, and hardware experts.

Slides:

  • [Slides in PDF (19.7MB)][Slides in DjVu (10.4MB)]
Video from Columbia University site

Series of lectures at the Microsoft/CIfAR Winter School on Machine Learning and Computer Vision.

Slides:

  • Part 1: [ PDF][ ODP]
  • Part 2: [ PDF][ ODP]
  • Part 3: [ PDF][ ODP]

Two lectures given at the 2009 Sino-USA Vision-Learning Pattern Recognition Summer School that took place at Peking University, Beijing.

  • [PDF (21.5MB)][DjVu (7.6MB)][ODP (15.2MB)] Deep Learning
  • [PDF (8.5MB)][DjVu (4.2MB)][ODP (15.0MB)] Other Methods and Applications of Deep Learning
  • [PDF (16.5MB)][DjVu (8.5MB)][ODP (12.0MB)] Learning Invariant Feature Hierarchies
  • [PDF (3.2MB)][DjVu (0.9MB)][ODP (41KB)] Future Challenges
  • [Slides in PDF (27.3MB)][Slides in DjVu (11.3MB)][Slides in ODP (Open Office / Open Document Format)(29.5MB)]
  • [Slides in PDF (26.2MB)][Slides in DjVu (10.8MB)][Slides in ODP (Open Office / Open Document Format)(29.4MB)]

Intelligent tasks, such as visual perception, auditory perception, and language understanding, require the construction of good internal representations of the world. Internal representations (or 'features') must be invariant (or robust) to irrelevant variations of the input, but must preserve the information relevant to the task. An important goal of our research is to devise methods that can automatically learn good internal representations from labeled and unlabeled data. Results from theoretical analysis, and experimental evidence from visual neuroscience, suggest that the visual world is best represented by a multi-stage hierarchy, in which features in successive stages are increasingly global, invariant, and abstract. The main question is how one can train such deep architectures from unlabeled data and limited amounts of labeled data.

Several methods have recently been proposed to train deep architectures in an unsupervised fashion. Each layer of the deep architecture is composed of a feed-forward encoder which computes a feature vector from the input, and a feed-back decoder which reconstructs the input from the features. The training shapes an energy landscape with low valleys around the training samples and high plateaus everywhere else. A number of such layers can be stacked and trained sequentially. A particular class of methods for deep energy-based unsupervised learning will be described that imposes sparsity constraints on the features. When applied to natural image patches, the method produces hierarchies of filters similar to those found in the mammalian visual cortex. A simple modification of the sparsity criterion produces locally-invariant features with similar characteristics as hand-designed features, such as SIFT.

An application to category-level object recognition with invariance to pose and illumination will be described. By stacking multiple stages of sparse features, and refining the whole system with supervised training, state-of-the-art accuracy can be achieved on standard datasets with very few labeled samples. Another application to vision-based navigation for off-road mobile robots will be shown. After a phase of off-line unsupervised learning, the system autonomously learns to discriminate obstacles from traversable areas at long range using labels produced with stereo vision for nearby areas.

This is joint work with Y-Lan Boureau, Karol Gregor, Raia Hadsell, Koray Kavakcuoglu, and Marc'Aurelio Ranzato.

  • [PDF (16.2MB)][DjVu (7.3MB)][ODP (2.8MB)] Energy-Based Models
  • [PDF (14.9MB)][DjVu (5.9MB)][ODP (4.4MB)] Supervised Learning
  • [PDF (14.2MB)][DjVu (6.2MB)][ODP (4.6MB)] Manifold Learning
  • [PDF (20.9MB)][DjVu (9.1MB)][ODP (19.7MB)]: Deep Learning
  • [Slides in PDF (22.7MB)][Slides in DjVu (7.6MB)] [Slides in ODP (Open Office / Open Document Format)(19.4MB)]

67 minute Video on YouTube

Abstract: A long-term goal of Machine Learning research is to solve highly complex 'intelligent' tasks, such as visual perception, auditory perception, and language understanding. To reach that goal, the ML community must solve two problems: the Deep Learning Problem, and the Partition Function Problem.

There is considerable theoretical and empirical evidence that complex tasks, such as invariant object recognition in vision, require 'deep' architectures, composed of multiple layers of trainable non-linear modules. The Deep Learning Problem is related to the difficulty of training such deep architectures.

Several methods have recently been proposed to train (or pre-train) deep architectures in an unsupervised fashion. Each layer of the deep architecture is composed of an encoder which computes a feature vector from the input, and a decoder which reconstructs the input from the features. A large number of such layers can be stacked and trained sequentially, thereby learning a deep hierarchy of features with increasing levels of abstraction. The training of each layer can be seen as shaping an energy landscape with low valleys around the training samples and high plateaus everywhere else. Forming these high plateaus constitutes the so-called Partition Function problem.

A particular class of methods for deep energy-based unsupervised learning will be described that solves the Partition Function problem by imposing sparsity constraints on the features. The method can learn multiple levels of sparse and overcomplete representations of data. When applied to natural image patches, the method produces hierarchies of filters similar to those found in the mammalian visual cortex.

An application to category-level object recognition with invariance to pose and illumination will be described (with a live demo). Another application to vision-based navigation for off-road mobile robots will be described (with videos). The system autonomously learns to discriminate obstacles from traversable areas at long range.

This is joint work with Y-Lan Boureau, Sumit Chopra, Raia Hadsell, Fu-Jie Huang, Koray Kavakcuoglu, and Marc'Aurelio Ranzato.


A 15 minute Interview with Yann LeCun on machine learning research, lecturing styles, where NIPS is going, the philosophy of science, and various other topics.

Video and slides of a talk given at the 2007 NIPS workshop on Efficient Learning, in Vancouver, Canada, December 7, 2007.

I'll probably make a few friends with that one.

  • [Slides in DjVu(3.6MB)] [Slides in PDF (8.6MB)]

Click on the image at right to view the video of the talk (with lots of questions from the audience).


Who is Afraid of Non-Convex Loss Functions?

Slides and Video of a talk given at the NIPS satellite session on deep learning, in Vancouver, Canada, December 6, 2007.
  • [Slides in DjVu(5.8MB)] [Slides in PDF (8.9MB)] [Slides in ODP (4.0MB)]
Video of the talk: Part 1 (85.0MB), Part 2 (84.3MB)

(Part 2 also contains Martin Szummer's talk)

Other talks at that satellite session are available at the meeting's main web site.

Slides of a keynote talk given at the 2007 International Conference on Document Analysis and Recognition (ICDAR), in Curitiba, Brazil, September 24, 2007.

  • [Slides in DjVu(11.7MB)] [Slides in PDF (26.7MB)]
PAPER:
Yann LeCun, Sumit Chopra, Marc'Aurelio Ranzato and Fu-Jie Huang: Energy-Based Models in Document Recognition and Computer Vision, Proc. International Conference on Document Analysis and Recognition (ICDAR), 2007, [key=lecun-icdar-keynote-07]. [DjVu (110KB)] [PDF (355KB)] [PS.GZ (551KB)]

Abstract: Over the last few years, the Machine Learning and Natural Language Processing communities have devoted a considerable amount of work to learning models whose outputs are 'structured', such as sequences of characters and words in a human language. The methods of choice include Conditional Random Fields, Hidden Markov SVMs, and Maximum Margin Markov Networks. These models can be seen as un-normalized versions of discriminative Hidden Markov Models. It may come as a surprise to the ICDAR community that this class of models was originally developed in the handwriting recognition community in the mid 90's to train handwriting recognition systems discriminatively at the word level. The various approaches can be described in a unified manner through the concept of 'Energy-Based Model' (EBM). EBMs capture dependencies between variables by associating a scalar energy to each configuration of the variables. Given a set of observed variables (e.g. an image), EBM inference consists in finding configurations of unobserved variables (e.g. a recognized word or sentence) that minimize the energy. Training an EBM consists in designing a loss function whose minimization will shape the energy surface so that correct variable configurations have lower energies than incorrect configurations. The main advantage of the EBM approach is to circumvent one of the main difficulties associated with training probabilistic models: keeping them properly normalized, a potentially intractable problem with complex models. Energy-Based learning has been applied with considerable success to such problems as handwriting recognition, natural language processing, biological sequence analysis, computer vision (object detection and recognition), image segmentation, image restoration, unsupervised feature learning, and dimensionality reduction. Several specific applications will be described (and, for some, illustrated with real-time demonstrations), including: a check reading system; a real-time system for simultaneously detecting human faces in images and estimating their pose; an unsupervised method for learning invariant feature hierarchies; and a real-time system for detecting and recognizing generic object categories in images, such as airplanes, cars, animals, and people.

Slides and audio podcast of a tutorial given at the 2007 IPAM Workshop on the Mathematics of Knowledge and Search Engines, September 14, 2007.

  • 1. Learning Similarity Metrics [ DjVu(6.1MB)] [PDF (14.2MB)]
  • 2. Supervised and Unsupervised Methods for Learning Invariant Feature Hierarchies [DjVu(13.1MB)] [PDF (29.4MB)]
  • Audio Podcast: [MP3 from NYU (58MB)][MP3 from IPAM (58MB)]

Slides of a tutorial given at the 2007 International Computer Vision Summer School, July 13, 2007.

  • [Slides in DjVu(12.9MB)] [Slides in PDF (30.0MB)] [Open Office .ODP (17.7MB)]
Slides and video of an invited talk given at the 2006 NIPS workshop 'Learning to Compare Samples', December 8, 2006.
  • [Slides in DjVu(2.7MB)] [Slides in PDF (13.7MB)]
  • Video of the talk at VideoLectures.net.
  • [Slides in DjVu(6.5MB)] [Slides in PDF (23.6MB)] [OpenOffice .ODP (4.9MB)] [PowerPoint .PPT (9.5MB)]

The video of the tutorial is available from the NIPS website in .mov (QuickTime) format at several resolutions:

  • 320x240.
  • 640x480.
  • 900x600.
  • [Slides in DjVu(8.4MB)] [Slides in PDF (25.5MB)]

Slides of a 3-hour tutorial given by Yann LeCun at the 2006 CIAR Summer School: Neural Computation & Adaptive Perception, at the University of Toronto. This is a new version of the tutorial based on the paper A Tutorial on Energy-Based Learning, and is considerably reworked from the 2005 IPAM version.

  • A Tutorial on Energy-Based Learning:[Slides in DjVu (5.2MB)][Slides in PDF (18.2MB)]
  • Deep Learning for Generic Object Recognition:[Slides in DjVu (3.8MB)][Slides in PDF (11.6MB)]

Supervised and Unsupervised Learning with Energy-Based Models: [Slides in DjVu (6.9MB)][Slides in PDF (20.6MB)]

This is a good overview of research at CBLL: energy-based learning, object recognition, face detection, unsupervised feature learning, and robot vision and navigation.

Slides and Videos of a 4-hour tutorial given by Yann LeCun at the 2005 IPAM Graduate Summer School: Intelligent Extraction of Information from Graphs and High Dimensional Data, at IPAM/UCLA. Streaming videos of all the talks are available from the IPAM web site in RealVideo format.

The tutorial includes 4 talks:

  • Energy-Based Models part 1, Introduction:
    [Streaming Video of the Talk][Slides in DjVu (2.2MB)][Slides in PDF (4.0MB)]
  • Energy-Based Models part 2, Architectures and Loss functions:
    [Streaming Video of the Talk][Slides in DjVu (3.7MB)][Slides in PDF (5.3MB)]
  • Architectures for Invariant Image Recognition, Convolutional Networks:
    [Streaming Video of the Talk][Slides in DjVu (4.7MB)][Slides in PDF (11.9MB)]
  • Trainable Dissimilarity Metrics, Segmentation, Sequence Labeling, Graph Transformer Networks:
    [Streaming Video of the Talk][Slides in DjVu (2.1MB)][Slides in PDF (5.7MB)]
Slides and videos of a 4-hour tutorial given by Yann LeCun at the Learning Theory Summer School organized by the Toyota Technological Institute in Chicago. The tutorial includes 3 talks. The 3 videos are available from VideoLectures.net.
  • Energy-Based Models: [DjVu (5.5MB)][PDF (8.2MB)]
  • Invariant Recognition, Convolutional Networks: [DjVu (5.2MB)][PDF (13.2MB)]
  • Graph Transformer Networks: [DjVu (1.1MB)][PDF (2.1MB)]
VideoLectures.net also has videos of two 'lunch-time debates' (or panel discussions) in which Yann was a participant, together with Rob Schapire, David McAllester, Yasemin Altun, Mikhail Belkin, Yoram Singer, and John Langford.

  • View a Video of Yann's talk at MSRI:
    • Download QuickTime MPEG4 File from MSRI (508MB)
    • Download QuickTime MPEG4 File from NYU (508MB)
  • Slides:[DjVu (4.8MB)][PDF (9.8MB)]
A 1-hour talk (with a video) of a Distinguished Lecture given on October 22, 2001 at the University of Illinois, Urbana-Champaign.

Name

djview4 - Standalone DjVu viewer

Synopsis

djview4 [options] [argument]

Description

Standalone viewer for DjVu files. Features include navigating documents, zooming and panning page images, producing and displaying thumbnails, displaying document outlines, searching documents for particular words in the hidden text layer, copying hidden text to the clipboard, saving pages and documents as bundled or indirect multi-page files, and printing pages and documents. The viewer can simultaneously display several pages using a side-by-side or a continuous layout.
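
For instance, assuming a hypothetical DjVu file named book.djvu in the current directory, a minimal invocation simply opens the document in a new viewer window:

  djview4 book.djvu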

Command Line Argument and Options

This program can run as a standalone program or as a slave process for the DjVu browser plugin nsdejavu. When running as a standalone program, the command line argument argument can be:

A local DjVu document URL of the form:
file:///path/name.djvu[?djvuopts&keyword=value&...]
The square brackets delimit the optional components of the URL. Various options can be specified using a syntax similar to that of CGI arguments. Specifying options in this manner is very useful for a browser plugin because there are no command line arguments. In the case of a standalone viewer, all options can be specified as command line arguments.

A remote DjVu document URL of the form:
http://host/path/name.djvu[?djvuopts&keyword=value&...]
Browsing remote DjVu documents with the standalone viewer is less efficient than using the browser plugin. The standalone viewer does not benefit from the browser caching strategies and proxy settings. Proxy settings for the standalone viewer can be set independently with the preferences dialog.
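
As a concrete sketch of the CGI-style argument syntax described above (the file path, host name, and option values are hypothetical), the same document could be opened at page 12 in 'fit width' mode either locally or over HTTP; the quotes keep the shell from interpreting the & characters:

  djview4 'file:///home/user/book.djvu?djvuopts&page=12&zoom=width'
  djview4 'http://example.com/docs/book.djvu?djvuopts&page=12&zoom=width'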

An extensive list of options is recognized. Most options can be specified as command line arguments starting with the customary dash character (-) or using the syntax of CGI arguments in the document URL. Some options, however, are only meaningful as command line arguments. Other options are only recognized when running the X11 version of the djview4 program.

Command Line Options

The following options are only meaningful when specified on the command line. Some of them are only recognized by the X11 version of the djview4 program; unlike most djview4 options, X11 options that demand an argument do not use the equal character to introduce their argument.

The remaining options can be specified on the command line or within the document URL using a syntax similar to that of CGI arguments. When constructing CGI arguments from these options, the viewer only recognizes keywords that appear after the word djvuopts. The keywords are derived from the option names by removing the initial dashes.

-page=pagename
Display a specific document page. The viewer first searches a page whose identifier matches the argument pagename. Otherwise, if pagename is a number preceded by character + or -, the viewer performs a displacement relative to the current page. Otherwise, starting from the current page and wrapping around, it searches for a page whose title matches the argument pagename. Otherwise, if pagename is numerical, it is interpreted as an ordinal page number. Otherwise, and finally, the viewer searches a page whose name matches pagename.
-pageno=pagenumber
The page searching algorithm for option page can cause ambiguities when page titles can be interpreted as numbers. The argument of option pageno is always interpreted as an ordinal page number. This option is less portable than page because it is not recognized by earlier versions of the djvu plugin. When the use of this option is necessary, it is advisable to use both the page and pageno options.
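
For example, with a hypothetical file book.djvu, the first command below moves ten pages forward relative to the current page, while the second forces page 42 to be treated as an ordinal page number and, as advised above, passes both page and pageno for compatibility with older viewers:

  djview4 -page=+10 book.djvu
  djview4 -page=42 -pageno=42 book.djvu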
-zoom=zoomfactor
Specify the initial zoom factor. Unless the toolbar, pop-up menu and keyboard are disabled, the user will be able to change the zoom factor. Legal values for zoomfactor are shown in the table below:
number     Magnification factor in the range 10% to 999%.
one2one    Select the 'one-to-one' mode.
width      Select the 'fit width' mode.
page       Select the 'fit page' mode.
stretch    Stretch the image to the plugin window size.
-showposition=px,py
Specify a point in the current page that should be as close as possible to the center of the window. The horizontal and vertical positions px,py in the current page are given as fractions in the range 0 to 1. For instance, 0,0 designates the upper left corner of the page, 0.5,0.5 is the center, and 1,1 is the lower right corner.
-mode=modespec
Specify the initial display mode. Unless the toolbar and pop-up menu are disabled, the user will be able to change it. Legal values for modespec are shown in the table below:
color    Display the color image.
bw       Display the foreground mask only.
fore     Display the foreground only.
back     Display the background only.
text     Overlay the hidden text over the color image.
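
As a small sketch combining the two options above (book.djvu is again a hypothetical file), the following opens the document in 'fit width' zoom and displays only the foreground mask:

  djview4 -zoom=width -mode=bw book.djvu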
-hor_align=keyword, -halign=keyword
Specify the horizontal position of the page in the viewer window. (This does not specify what part of the page will be shown, but rather how margins will be laid out around the page in the plugin window.) Argument keyword must be left, center, or right.
-ver_align=keyword, -valign=keyword
Specify the vertical position of the page in the viewer window. (This does not specify what part of the page will be shown, but rather how margins will be laid out around the page in the plugin window.) Argument keyword must be top, center, or bottom.
-cache=(yes|no)
Enable or disable the caching of fully decoded pages of the document. Caching is on by default. Caching of documents whose URL does not contain an extension .djvu or .djv is off by default.
-continuous=(yes|no)
Enable or disable the continuous layout of multipage documents.
-sidebyside=(yes|no), -side_by_side=(yes|no)
Enable or disable the side-by-side layout of multipage documents.
-coverpage=(yes|no)
Specify whether the cover page must be displayed alone when multipage documents are shown in side-by-side layout.
-righttoleft=(yes|no)
Specify whether pages should be arranged right-to-left when multipage documents are shown in side-by-side layout.
-layout=keyword{,keyword}
Specify the layout settings using a single list of comma-separated keywords. The following keywords are recognized:
single            Disable both the side-by-side and continuous modes.
double            Enable the side-by-side mode.
continuous        Enable the continuous mode.
cover, nocover    Specify whether the cover page should be left alone.
ltor, rtol        Specify the left-to-right or right-to-left layout.
gap, nogap        Specify whether there is a gap between pages.
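
For example (with a hypothetical book.djvu), the following requests a continuous, side-by-side layout in which the cover page is not displayed alone:

  djview4 -layout=double,continuous,nocover book.djvu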
-scrollbars=(yes|no)
Enable or disable the presence of scroll bars when the full image size exceeds the plugin window size. The default is yes.
-frame=(yes|no)
Enable or disable the display of a thin frame and shadow around the DjVu images. Frames are enabled by default.
-background=color
Specify the color of the background border displayed around the document. The color color must be given in hexadecimal RRGGBB or #RRGGBB format.
-toolbar=keyword{(,|+|-)keyword}
Controls the appearance and the contents of the toolbar. The argument of option toolbar is composed of a number of keywords separated by the characters comma, plus, or minus. The appearance of the toolbar is controlled by keywords placed before the first occurrence of a character plus or minus. The following keywords are recognized in this context:
no        Disable the toolbar.
always    Display the toolbar.
auto      Enable toolbar 'autohide' mode (not implemented).
top       Place the toolbar along the top edge.
bottom    Place the toolbar along the bottom edge.
The contents of the toolbar is controlled by keywords placed after the first occurrence of a character plus or minus. Each keyword adds (after a plus) or removes (after a minus) a particular toolbar button or group of buttons. The initial content of the toolbar is determined by the first occurrence of a character plus or minus. When this is a plus, the toolbar is initially empty. When this is a minus, the toolbar initially contains the default selection of buttons.

The following keywords are recognized:

modecombo    for the display mode selection tool.
zoomcombo    for the zoom selection tool.
zoom         for the zoom buttons.
select       for the selection button.
rotate       for the image rotation buttons.
find         for the text search button.
new          for the new window button.
open         for the open new document button.
save         for the save button.
print        for the print button.
layout       for the page layout buttons.
pagecombo    for the page selection tool.
firstlast    for the first-page and last-page buttons.
prevnext     for the previous- and next-page buttons.

For the sake of backward compatibility, the keywords fore, fore_button, back, back_button, bw, bw_button, color, and color_button are interpreted like keyword modecombo; the keyword rescombo is a synonym of zoomcombo; the keywords pan, zoomsel, and textsel are interpreted like keyword select; and the keyword doublepage is interpreted like keyword layout. All other keywords are ignored.
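
As a sketch of this composition syntax (book.djvu is a hypothetical file), the first command below keeps the default toolbar at the bottom edge but removes the print and save buttons, while the second starts from an empty toolbar and adds only the zoom buttons, the page selection tool, and the previous/next-page buttons:

  djview4 -toolbar=bottom-print-save book.djvu
  djview4 -toolbar=always+zoom+pagecombo+prevnext book.djvu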

-menubar=(yes|no)
Enable or disable the presence of the menu bar located on top of the window.
-statusbar=(yes|no)
Enable or disable the presence of the status bar located at the bottom of the window.
-sidebar=keyword{,keyword}
Control the dockable panels. The argument is a comma-separated list of keywords. A first group of keywords selects which panels are affected. Omitting these keywords selects all panels. A second group of keywords then controls the visibility and the position of the selected panels.
thumbnails            specify the thumbnail panel.
outline, bookmarks    specify the outline panel.
search, find          specify the search panel.
yes, true             show the specified panels (default).
no, false             hide the specified panels.
left                  dock the specified panels on the left side.
right                 dock the specified panels on the right side.
top                   dock the specified panels on the top side.
bottom                dock the specified panels on the bottom side.
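
For instance (again with a hypothetical book.djvu), the following should show only the thumbnail panel, docked on the left side, leaving the other panels untouched:

  djview4 -sidebar=thumbnails,yes,left book.djvu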
-thumbnails=keyword{,keyword}
Compatibility alias for -sidebar=thumbnails,keyword{,keyword}.
-outline=keyword{,keyword}
Compatibility alias for -sidebar=outline,keyword{,keyword}.
-menu=(yes|no)
Enable or disable the pop-up menu.
-keyboard=(yes|no)
Enable or disable the DjVu plugin keyboard shortcuts. The default is yes (enabled).
-mouse=(yes|no)
Enable or disable mouse interaction for panning and selecting. The default is yes (enabled).
-links=(yes|no)
Enable or disable hyper-links in the DjVu image. Hyper-links are enabled by default.
-highlight=x,y,w,h[,color]
Display a highlighted rectangle at the specified coordinates in the current page and with the specified color. Coordinates x, y, w, and h are measured in document image coordinates (not screen coordinates). The origin is set at the bottom left corner of the image. The color color must be given in hexadecimal RRGGBB or #RRGGBB format. Multiple highlighted zones can be specified and can be interspersed with multiple -page=pagename options.
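
As a hedged example (the file name, page, coordinates, and color are all hypothetical), the following should open book.djvu on page 5 and highlight in yellow a rectangle 300 units wide and 80 units high whose lower left corner sits at (100,200) in document image coordinates:

  djview4 -page=5 -highlight=100,200,300,80,FFFF00 book.djvu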
-find=text
Highlight occurrences of the given string text. This option works when the document contains a hidden text layer. It can be used in conjunction with -sidebar=find to display the text searching interface.

String text can be terminated by a slash (/) followed by letters specifying search options. The following letters are recognized:

c    Case-sensitive search.
C    Case-insensitive search (default).
w    Search hits start on word boundaries (default).
W    Ignore word boundaries.
r    Regular expression search.
R    String search (default).
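
For example (the file and search string are hypothetical), the following should open the search panel and highlight case-sensitive, whole-word occurrences of 'Energy':

  djview4 -sidebar=find -find=Energy/cw book.djvu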

-rotate=(0|90|180|270)
Rotate the djvu image by the specified angle expressed in degrees counter-clockwise.
-print=(yes|no)
Enable or disable printing the DjVu document. Printing is enabled by default.
-save=(yes|no)
Enable or disable saving the DjVu document. Saving is enabled by default.
-passive
Cause the DjVu image to be displayed in a manner similar to an ordinary web image. The default zoom factor is changed to page. The toolbar, the status bar, the scrollbars, the menus, and the keyboard shortcuts are disabled.
-passivestretch
Cause the DjVu image to be displayed in a manner similar to an ordinary web image. The default zoom factor is changed to stretch. The toolbar, the status bar, the scrollbars, the menus, and the keyboard shortcuts are disabled.
-nomenu, -notoolbar, -noscrollbars
These options were recognized by some versions of the DjVu viewers and are honored for the sake of backward compatibility. A warning message is printed when option -verbose is active.
-logo, -textsel, -search
These options were recognized by some versions of the DjVu viewers but are currently not implemented by djview4. A warning message is printed when option -verbose is active.

Usage

Most features can be accessed using the menus, the toolbar, the side bar, or the pop-up menu shown when the right mouse button is depressed over a DjVu image. Detailed help can be accessed by clicking the contextual help icon in the toolbar and then clicking on various sections of the djview user interface.

The following table lists some useful key combinations recognized when the djvu document is active:

Key                      Action
SHIFT+F1                 Activate the contextual help.
1, 2, and 3              Change zoom to 100%, 200%, and 300%.
Up, Down, Left, Right    Scroll the image in the given direction.
Home                     Display the top left corner of the image.
End                      Display the bottom right corner of the image.
Control+Home             Go to the beginning of the multi-page document.
Control+End              Go to the end of the multi-page document.
Space, Return            Scroll down or go to the next page.
Backspace                Scroll up or go to the previous page.
Page Down                Go to the next page.
Page Up                  Go to the previous page.
+, -                     Zoom in and out.
[, ]                     Rotate the image.
W                        Select the 'Fit Width' zooming mode.
P                        Select the 'Fit Page' zooming mode.
CTRL+F, F3               Search the hidden text layer.

Handy effects can be achieved by holding modifier keys. Although these keys are configurable from the preference dialog, the following table lists the default assignments:

Key           Action
CTRL+SHIFT    Hold these keys to show the magnification lens.
CTRL          Hold this key to select an area with the mouse.
SHIFT         Hold this key to display all hyperlinks.

Credits

This program was written by Léon Bottou <leonb@users.sf.net> and is distributed under the GNU General Public License. This program includes code derived from the program tiff2pdf, written by Ross Finlayson and released under a BSD license.

See Also

djvu(1), ddjvu(1), nsdejavu(1), djview3(1), tiff2pdf(1)