
Is it possible to set the permissions Geoserver creates new directories with?

I'm trying to automate the process of pushing from a server to GitHub as a way of tracking GeoServer changes, and pulling from GitHub as a way of deploying new work. The push side works fine, but if I create a new layer in GeoServer and then try to edit it elsewhere and pull the changes back to the server, it fails.

Steps:

  1. Publish a new layer in GeoServer
  2. Push that from the server, pull it to my own machine
  3. Make any changes - e.g. adding height.ftl or description.ftl
  4. Push that from my machine, try to pull it back on the server.

At this point, I get a "permission denied" error, and the issue seems to be that GeoServer isn't giving these directories group write permissions; specifically, it's setting: drwxr-xr-x

A manual chmod g+w fixes this, but I'm trying to make the process more automated. Is there a way to set the permissions with which GeoServer creates these directories?

[Added in a later edit: it occurred to me to check the umask settings, and umask seems to be set to 0002 for my user account, the user GeoServer runs as, and the user GitHub runs as. They're also all members of each other's groups. If I've understood this correctly, it means the server's default behaviour is to give group write permission to new directories and files, so GeoServer must be explicitly withholding that permission.]


GeoServer does not do anything to control the permissions of the directories it creates, so they get the default permissions the OS assigns. I believe you have to change the umask for the user that's running the GeoServer process: http://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html
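For example (a minimal sketch, assuming GeoServer runs inside Tomcat started via catalina.sh; file names and locations vary by installation), the umask can be set in a setenv.sh script so that it applies to the GeoServer process:

    # $CATALINA_BASE/bin/setenv.sh -- sourced by catalina.sh at startup.
    # A umask of 0002 keeps the group-write bit, so new data directory
    # entries come out drwxrwxr-x instead of drwxr-xr-x.
    umask 0002

After restarting Tomcat, directories created for newly published layers should be group-writable.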


Overview

This proposal aims to reuse, as much as possible, the GeoServer components, plugins, and capabilities already in place in order to implement the A&A (Authentication and Authorization) layer for the GeoNode resources.

This proposal aims at refactoring the security integration between GeoServer and GeoNode, reusing available GeoServer capabilities where possible (either in the core version or via existing plugins) and creating extensions that would live in the GeoServer codebase where needed. The goal is to improve the maintainability and compatibility of the integration between GeoServer and GeoNode by having GeoNode rely as much as possible on standard GeoServer plugins.

The basic idea is the following:

Authentication

The proposal is to enable GeoNode to act as an OpenID Connect Provider and GeoServer to act as an OpenID Connect Consumer instead.
The OpenID Connect protocol makes use of tokens to assert users' identities. This would allow us to avoid the obsolete cookie-based mechanism.

Authorization

The Authorization rules for the resources created by GeoNode must be configured in the GeoServer Catalog and associated with the users' roles.
The GeoNode Administrators do not have to configure them manually; this can be done automatically by GeoNode through the GeoFence Embedded plugin, which overrides and enhances the GeoServer Authorization subsystem and exposes a REST API that allows remote control of the auth rules on the catalog.
Every time a GeoNode user changes the permissions of a GeoNode Resource published in GeoServer, GeoNode should automatically update the GeoServer access rules via REST calls.
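As a sketch of such a call (the endpoint is the one exposed by the GeoFence Embedded plugin's REST API; credentials, host, role, workspace, and layer names here are hypothetical):

    # Create a GeoFence rule allowing role ROLE_EDITOR to access
    # layer "roads" in workspace "geonode".
    curl -u admin:geoserver -X POST \
         -H "Content-Type: text/xml" \
         -d "<Rule><priority>0</priority><roleName>ROLE_EDITOR</roleName><workspace>geonode</workspace><layer>roads</layer><access>ALLOW</access></Rule>" \
         http://localhost:8080/geoserver/rest/geofence/rules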

It is worth noting that the GeoNode command-line APIs should also be updated in order to synchronize and clean up the permissions on GeoServer whenever an issue occurs and/or the Authorization rules get out of sync.



Much of the inner workings of geographic information systems is organized around data models: computational structures (rasters and vectors are common variants) that determine how GIS stores, organizes, and displays various types of information for different purposes. Put simply, data models treat the world in terms of objects that represent entities and their related attributes. In GIS, there is usually no dedicated model of the processes that govern dynamics, adaptation, and evolution of a system. For many years, GIS has advanced the potential for unifying representations of entities and processes, and recently, the long-standing promise of consociating the two is beginning to be realized, enabling a burgeoning paradigm shift to a new style of GIS.

The next generation of geographic information systems will be driven by process models. These are usually composed of algorithms and heuristics that will act on users' requests for the GIS to perform some service for them, connect to digital networks to contextualize those requests, and interact seamlessly with other databases and processes to achieve users' goals. Alternatively, process models may be used as a synthetic representation of system parts to build artificial phenomena "in silico" that can be subjected to experimentation and what-if scenario building in ways that are not possible "on the ground." Geoprocessing has been featured with increasing priority in GIS for some time, and conventional GIS already relies on geoprocessing for spatial analysis and data manipulation.

Process models represent an evolution from these existing technologies, catalyzed by artificial intelligence that takes traditional GIS operations into the world of dynamic, proactive computing on a semantic Web of interconnected data and intelligent software agents. Imagine, for example, building a representation of the earth's boundary layer climate in GIS, but also being able to run dynamic weather patterns, storms, and hurricanes over that data, using climate models that sit in a supercomputing center on another continent. This article charts the development of process models in the geographic information sciences and discusses the technologies that have shaped them from the outside in. In addition, it explores their future potential in allying next-generation GIS to the semantic Web, virtual worlds, computer gaming, computational social science, business intelligence, cyberplaces, the emerging "Internet of Things," and newly discovered nanospaces.

Background

Much of the innovation for process models in the geographic sciences has come from within the geographic information technology community. Geoprocessing featured prominently in the early origins of online GIS, where server-based GIS delegated much of the work that a desktop client would perform to the background, hidden from the user. Interest in geoprocessing has resurfaced recently, largely because of increased enthusiasm for online cartography and expanding interest in schemes for appropriating, parsing, and reconstituting diverse data sources from around the Web into novel mashups that lean on application programming interfaces (interfaces to centralized code bases) that have origins in search engine technology.


The cloud of Wi-Fi signals that envelops central Salt Lake City, Utah, generated by approximately 1,700 access points.

Concurrently, many scholars in the geographic information science community have been developing innovative methods for fusing representations of space and time in GIS. This has seen the infusion of schemes from time geography into spatial database and data access structures to allow structured queries to be performed on data's temporal, as well as spatial, attributes. Time geography has also been used in geovisualization, as a method for representing temporal attributes of datasets spatially, thereby allowing them to be subjected to standard spatial analysis. Much of this work has been based around a move toward creating cyberinfrastructure for cross-disciplinary research teams, and significant advances have been made in developing technologies to fuse GIS with real-time data from the diverse array of interconnected sensors and broadcast devices that now permeate inventory systems, long-term scientific observatories, transportation infrastructure, and even our personal communication systems. In parallel, work in spatial simulation has edged ever closer toward a tight coupling with GIS, particularly in high-resolution modeling and geocomputation using cellular and agent-based automata as computational vehicles for animating objects through complex adaptive systems. Automata are, essentially, empty data structures capable of processing information and exchanging it with other automata. Simulation builders often turn to GIS routines in search of algorithms for handling the information exchange between automata, and over time, a natural affinity between the two has begun to develop into a mutually influential research field often referred to as geosimulation.

Much of the work in developing process models is finding its way into GIS from outside fields, however, and developments in information technology for the Web (and for handling geographic data on the Web) have been particularly influential. A massive growth in the volume and nature of data in which we find our lives and work enveloped has catalyzed a transition from a previous model of the Web to a newer-generation phase. The Web remains fundamentally the same in its architecture, but the number of applications and devices that contribute to it has swelled appreciably, and with this shift, a phase change has taken place, instantiating what is now commonly referred to as Web 2.0. The previous iteration of Web development was centered on static, subscription-based content aggregated by dominant portals such as AltaVista, AOL, Excite, HotBot, Infoseek, Lycos, and Yahoo! By comparison, much of the current generation model for the Web is characterized by user-generated content (blogs, Twitter tweets, photographs, points of interest, even maps) and flexible transfers between diverse data sources. Moreover, these varied data streams interface seamlessly over new interoperable database and browser technologies and are often delivered in custom-controlled formats directly to browsers or handheld devices via channels such as Really Simple Syndication (RSS). This takes place dynamically, updating in near real time as the ecology of the Web ebbs and flows.

Enveloping these developments has been a groundswell in the volume of geographic data fed to the Web. In many ways, Web 2.0 has been built on the back of the GeoWeb that has formed between growing volumes of location-enabled devices and data that either interface with the Web in standardized exchanges (uploading geotagged content to online data warehouses, for example) or rely on the Web for their functionality (as in the case of alternative positioning systems that triangulate their location based on wireless access points). The reduction in the cost of geographic positioning technologies led to the massive infusion of location-aware technology into cameras, phones, running shoes, and cars, atop bicycle handlebars, and in clothing, pets, handheld gaming devices, and asset-tracking devices on the products that we buy in supermarkets. Devices all over the world began to sense and communicate their absolute and relative positions, allowing, first, the devices to be location tagged; second, those tags to become a significant medium for organizing, browsing, searching, and retrieving data; and, third, their relative geography to become the semantic context that ascribes information to those objects (and their users). Indeed, for many online activities, maps and GIS have become the main portal to the Web.

Semantic intelligence is driving the next evolution of the Web, characterized by the use of process models (usually referred to as software agents or Web services) as artificial intelligence that can reason about the meaning of data that courses through Internet and communication networks. A slew of ontological schemes (methods for classifying data and its relationships) provides the scaffolding that supports semantic reasoning online. Geography and location ontology is an important component of online semantics, allowing processes to not only know where something is in both network space and the tangible geography of the real world but also to reason about where it might have been, where it might go and why, whether that is usual or unusual behavior, what might travel with it, what might be left behind, what activities it might engage in along the way or when it reaches its destination, and what services might be suggested to facilitate these activities. Often, these may be location-based services that make use of the geographic position of a device, its user, or the local network of related devices, or they may make use of the network to deliver "action at a distance" to enrich a user's local experience, by connecting the user to friends across the world, for example.

Process models have also been developed in other information systems. Much of the potential for advancing geographic information technology stems from the ability of GIS to interface with other processes and related informatics through complementary process modeling schemes. The early precursors of this interoperability are already beginning to take shape through the fusion of GIS and building information models (BIMs). BIMs offer the ability of urban GIS to focus attention on a much finer resolution than ever, to the scale of buildings' structural parts and their mechanical systems. GIS allows BIMs to consider the role of the building in a larger urban, social, geological, and ecosystem context. When process models are added to the mix, the complementary functionality expands even farther. Consider, for example, the uses of a GIS that represents the building footprints of an entire city but can also connect to building information models to calculate the energy load of independent structures for hundreds of potential weather scenarios, or BIMs that can interact with an earthquake simulation to test building infrastructural response to subsurface deformation in the bedrock underneath, using cartography to visualize cascading envelopes of projected impact for potential aftershocks.

Virtual Worlds

Many advocates of the semantic Web envision a massive dynamic system of digitally networked objects and people, continuously casting "data shadows" with enough resolution and fidelity to constitute a virtual representation of the tangible world. These virtual worlds are already being built, and many people and companies choose to immerse themselves in online virtual worlds and massively multiplayer online role-playing gaming (MMORPG) environments for socializing, conducting business, organizing remotely, collaborating on research projects, traveling vicariously, and so on.

Here, process models are also driving advances in technology. Process models from computer gaming engines have been ported to virtual worlds, to populate them with automated digital assistants and synthetic people that behave and act realistically and can engage with users in the game world in much the same way that social interactions take place in the real world. Virtual worlds have been coupled with realistic, built and natural environment representations constructed using geometry familiar to GIS. The current generation of process models for MMORPG environments is relatively simple in its treatment of spatial behavior, but rapid advances are being made in infusing them with a range of behavioral geographies and spatial cognitive abilities that will enable more sophisticated spatial reasoning to be included in their routines.



Gaming is just one application of process models in virtual worlds. The actions and interactions of synthetic avatars representing real-world people can be traced with perfect accuracy in virtual worlds because they are digital by their very nature, and often, that data may be associated with the data shadows that users cast from their real-world telecommunications and transactional activities in the tangible world. Virtual worlds are seen by many as terra novae for new forms of retailing, marketing, research, and online collaboration in which avatar representations of real people mix with process models that study them, mimic missing components of their synthetic physical or social environments, mine data, perform calculations, and reason about their actions and interactions.

Code Space

Aspects of the semantic Web may seep into the real world, from cyberspace to "meatspace." In many ways, the distinction between the two has long ago blurred, and for many of us, our lives are already fully immersed in cyberplaces that couple computer bits and tangible bricks, and we find much of our activity steeped in flows of information that react to our actions and often shape what we do. Geographers have begun to document the emergence of what we might term a "code space," a burgeoning software geography that identifies us and authenticates our credentials to access particular spaces at particular times and regulates the sets of permissions that determine what we might do, and with whom, while we are there. Commercial vehicle traffic for interstate commerce, commuter transit systems, and airports are obvious examples of code space in operation in our everyday lives. Mail systems transitioned fully to coded space a long time ago: for parcel delivery services, almost every object and activity can be identified and traced as it progresses through the system, from collection to delivery on our doorstep. Other code spaces are rapidly moving to the foreground: patients, doctors, and supplies are being handled in a similar fashion in hospitals. Goods in supermarkets and shopping malls are interconnected through intricate webs of bar codes, radio-frequency identification (RFID) tags, and inventory management systems that reason about their position in a network of stores and even the supply geography of individual packets on a shelf. Similarly, transactions may be tagged at the point of sale and associated uniquely to customers using loyalty, debit, and credit cards that also link customers to their neighbors at home and similar demographic groupings in other cities, using sophisticated geodemographic analyses. The influence between location-aware technology and sociology is also beginning to reverse. Other code spaces facilitate the emergence of "smart mobs" or "flash mobs," social collectives organized and mediated by Internet and communications technologies: text messaging, instant messaging, and tweeting, for example, for the purposes of political organization, social networking, or, as is often the case, simple fun.

The Internet of Things

In technology circles, objects in a code space are referred to as "spimes," artifacts that are "aware" of their position in space and time and their position relative to other things; spimes also maintain a history of this location data. The term spime has arisen in discussions about the emergence of an Internet of Things, a secondary Internet that parallels the World Wide Web of networked computers and human users. The Internet of Things is composed of (often computationally simple) devices that are usually interconnected using wireless communications technologies and may be self-organizing in formation. While limited individually, these mesh networks adopt a collective processing power that is often greater than the sum of its parts when their independent process models are networked as large "swarms" of devices. Moreover, swarm networks tend to be very resilient to disruption, and their collective computational and communication power often grows as new devices are added to the swarm. Networks of early-stage spimes (proto-spimes) of this kind have already been developed using, for example, microelectromechanical systems (MEMS), which may be engineered as tiny devices that are capable of sensing changes in electrical current, light, chemistry, water vapor, and so on, in their immediate surroundings. When networked together in massive volumes, they can be used as large-geography sensor grids for earthquakes, hurricanes, and security, for example. Sensor readings can be conveyed in short hops between devices over large spaces, back to a human observer or information system for analysis. MEMS often contain a conventional operating system and storage medium and can thus also perform limited processing on the data that they collect, deciding, for example, to take a photograph if particular conditions are triggered, and geotagging that photograph with a GPS or based on triangulation with a base station.

Geodemographics and Related Business Intelligence

The science and practice of geodemographics are concerned with analyzing people, groups, and populations based on tightly coupling who they are with where they live. The who in this small formula can provide information about potential debtors', customers', or voters' likely economic profile, social status, or potential political affiliation on current issues, for example. The where part of the equation is tasked with identifying what part of a city, postal code, or neighborhood those people might reside in, for the purposes of allying them to their neighboring property markets, crime statistics, and retail landscapes, for example. Together, this allows populations and activities to be tagged with particular geodemographic labels or value platforms. These tags are used to guide a host of activities, from drawing polling samples to targeting mass mailing campaigns and siting roadside billboards. The dataware for geodemographics traditionally relied on mashing up socioeconomic data collected by census bureaus and other groups with market research and point-of-sale data gathered by businesses or conglomerates. Traditionally, the science has been relatively imprecise and plagued with problems of ecological fallacy in relying on assignment of group-level attributes to individual-level behavior. Because of early reliance on data from census organizations, which aggregate returns to arbitrary geographic zones, the spatial components of geodemographics have also suffered from problems of modifiable areal units (i.e., there are an almost infinite number of ways to delineate a geographic cluster). Data is often collected for single snapshots in time and is subject to serious problems of data decay; households, for example, may frequently move beyond or between lifestyles or trends, without adequate means in the geodemographic classification system to capture that transition longitudinally.

Process models could change geodemographics. When users browse the Web, their transactions and navigation patterns, the links they click, and even the amount of time that their mouse cursor hovers over a particular advertisement can be tracked and geocoded uniquely to their machine. Users' computers can be referenced to an address in the Internet protocol scheme, which can be associated to a tangible place in the real world using reverse geocoding. Along retail high streets and in shopping malls, customers now routinely yield a plethora of personal information in return for consumer loyalty cards, for example, or share their ZIP Codes and phone numbers at the point of sale, in addition to passively sharing their names when using credit or debit cards. By simply associating an e-mail address to this data, it is relatively straightforward, in many cases, to cross-reference one's activity in the tangible world with one's data shadow in cyberspace. Developments in related retail intelligence, business analytics, inferential statistics, and geocomputing have increased the level of sophistication with which data can be processed, analyzed, and mined for information. This allows the rapid assessment of emerging trends and geodemographic categories. Process models are even coded into the software at cash registers in some instances.

Much of this technology is allied to spimes and code spaces. Technologies based around RFID and RFID tagging, initially designed for automated stock taking in warehouses and stores, are now widely embedded in products, cards (and therefore wallets), and the environment with such pervasiveness that they enable widespread activity and interaction tracking, particularly within a closed environment such as a supermarket. Coupled to something like a customer loyalty card, these systems allow for real-time feeds of who is interacting (or not) with (not just buying, but handling, or even browsing) what products, where, when, with what frequency, and in what sequences. The huge volumes of data generated by such systems provide fertile training grounds for process models.

The increasing fusion of mobile telecommunication technologies with these systems opens up a new environment for coupling process models to mobile geodemographics. This is a novel development for two main reasons. First, it creates new avenues of inquiry and inference about people and transactions on the go (and associated questions and speculations regarding where they may have been, where they might be going, with whom, and to do what). Second, it allows geodemographic analysis to be refined to within-activity resolutions. This has already been put to use in the insurance industry, for example, to initiate pay-as-you-go vehicle coverage models, using GPS devices that report location information to insurance underwriters. Mobile phone providers have also experimented with business models based around location-based services and location-targeted advertising predicated on users' locations within the cell-phone grid, and groups have already begun to experiment with targeting billboard and radio advertising to individual cars based on similar schemes. New GIS schemes based around space-time process models and events are well positioned to interact with these technologies.

Nanosystems

When spimelike devices are built at very small geographies, capable of sensing and even manipulating objects at exceptionally fine scales, they become useful for nanoengineering. In recent years, there has been a massive fueling of interest in nanoscale science and development of motors, actuators, and manipulators at nanoscales. With these developments have come a veritable land grab and gold rush for scientific inquiry at hitherto relatively underexplored scales: within the earth, within the body, within objects, within anything to be found between 1 and 100 nanometers. Geographers missed out on the last bonanza at fine scales and were mostly absent from teams tasked with mapping the genome. The cartography required to visually map the genome is trivial and the processes that govern genomic patterns are completely alien to most geographers' skill sets, so their exclusion from these endeavors is understandable. The science and engineering surrounding nanotechnology differ from this situation, however, in that they are primarily concerned with spatiotemporal patterns and processes and the scaling of systems to new dimensions. These areas of inquiry are part of the geographer's craft and fall firmly within the domain of geographic information technologies. Process models with spatial sensing and semantic intelligence could play a vital role in future nanoscale exploration and engineering.

Computational Social Science

Geographic process models also offer tremendous benefits in supporting research and inquiry in the social sciences, where a new set of methods and models has been emerging under the banner of computational social science. Computational social science, in essence, is concerned with the use of computation (not just computers) to facilitate the assessment of ideas and development of theories for social science systems that have proved to be relatively impenetrable to academic inquiry by traditional means. Usually, the social systems are complex and nonlinear and evolve through convoluted feedback mechanisms that render them difficult or impossible to analyze using standard qualitative or quantitative analysis. Computational social scientists have, alternatively, borrowed ideas from computational biology to develop a suite of tools that will allow them to construct synthetic social systems within a computer, in silico, that can be manipulated, adapted, accelerated, or cast on diverging evolutionary paths in ways that would never be possible in the real world.

The success of these computational experiments relies on the ability of computational social science to generate realistic models of social processes, however, and much of the innovation in these fields has been contributed by geographers because of their skills in leveraging space and spatial thinking as a glue to bind diverse cross-disciplinary social science. Much of computational social science research involves simulation-building. To date, the artificial intelligence driving geography in these simulations has been rather simplistic, and development in process models offers a potential detour from this constraint. Moreover, computational social science models are often developed at the resolution of individual people and scaled to treat massive populations of connected "agents," with careful attention paid to the social mechanisms that determine their connections. This often requires that large amounts of data be managed and manipulated across scales, and it is no surprise that most model developers turn to GIS for these tasks. Connections between agent-based models and GIS have been mostly formulated as loose couplings in the past, but recent developments have seen functionality from geographic information science built directly into agent software architectures, with the result that agents begin to resemble geographic processors themselves, with realistic spatial cognition and thinking. These developments are potentially of great value in social science, both in providing new tools for advanced model building and in infusing spatial thinking into social science generally. At the same time, developments in agent-based computing have the potential to feed back into classic GIS as architectures for reasoning about and processing human environment data.

Epilogue

This is a wonderful time to be working with or developing geographic information technologies, at the cusp of some very exciting future developments that will bring GIS farther into the mainstream of information technology and will infuse geography and spatial thinking into a host of applications. Of course, some potential sobering futures for these developments should be mentioned. As process models are embedded in larger information, technical, or even sociotechnical systems, issues of accuracy, error, and error propagation in GIS become even more significant. Ethical issues surrounding the use of fine-grained positional data also become more complex when allied with process models that reason about the significance or context of that data. Moreover, the reliability of process models as appropriate representations of phenomena or systems must come under greater scrutiny.


1 Answer

Setgid

There are 2 forces at work here. The first is the setgid bit that's enabled on the directory named folder.
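For example, a directory with the setgid bit set looks like this (an illustrative listing, using the names mentioned below):

    $ ls -ld folder
    drwxr-sr-x 2 user group 4096 Jul  9 10:00 folder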

That's the s in the pack of characters at the beginning of this line. They're grouped thusly:
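    d  rwx  r-s  r-x
    |   |    |    |
    |   |    |    +-- other
    |   |    +------- group
    |   +------------ user (owner)
    +---------------- file type (directory)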

The r-s means that any files or directories created inside this folder will automatically have their group set to the group named group.

That's what caused the files foo.txt and bar.txt to be created like so:
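(Again illustrative; note the group column:)

    $ touch folder/foo.txt folder/bar.txt
    $ ls -l folder
    -rw-rw-r-- 1 user group 0 Jul  9 10:01 bar.txt
    -rw-rw-r-- 1 user group 0 Jul  9 10:01 foo.txt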

Permissions & umask

The permissions you're seeing are another matter. These are governed by your umask settings. You can see what your umask is set to with the umask command:
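    $ umask
    0002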

NOTE: these bits are also called "mode" bits.

It's a mask, so any permission bits that are enabled in it will be disabled on newly created files. In this example the only bit I want off is the write permission for other.

The "bits" in this command are given as octal digits. So a 2 equates to 010 in binary form, which is the write bit. A 4 (100) would mean you want read disabled. A 7 (111) means you want read/write/execute all disabled. Building it up from here:
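    $ umask 0007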

Would disable the read/write/execute bits for other users.

So then what about your files?

Well the umask governs the permissions that will get set when a new file is created. So if we had the following umask set:
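    $ umask 0002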

And started touching new files, we'd see them created like so:
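(File names and timestamps here are illustrative:)

    $ touch newfile1.txt
    $ ls -l newfile1.txt
    -rw-rw-r-- 1 user group 0 Jul  9 10:05 newfile1.txt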

If we changed it to something else, say this:
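    $ umask 0007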

It won't have any impact on files that we've already created though. See here:
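    $ touch newfile2.txt
    $ ls -l newfile1.txt newfile2.txt
    -rw-rw-r-- 1 user group 0 Jul  9 10:05 newfile1.txt   # unchanged
    -rw-rw---- 1 user group 0 Jul  9 10:06 newfile2.txt   # created under the new umask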

So then what's going on with the file browser?

The umask is what I'd call a "soft" setting. It is by no means absolute and can be bypassed fairly easily in Unix in a number of ways. Many of the tools take switches which allow you to specify the permissions as part of their operation.
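mkdir is one example (an illustrative run):

    $ umask 0077
    $ mkdir -m 775 mydir
    $ ls -ld mydir
    drwxrwxr-x 2 user group 4096 Jul  9 10:10 mydir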

With the -m switch we can override the umask. The touch command doesn't have this facility, so you have to get creative. See this U&L Q&A titled: Can files be created with permissions set on the command line? for just such methods.

Other ways? Just override the umask. The file browser is most likely either doing this or completely ignoring the umask and laying down the file with whatever permissions it's configured to use.


A Public folder exists in your Home directory (/home/user) for sharing files with other users. If another user wants to access this Public folder, the execute bit for the world should be set on the Home directory.

If you do not need to allow others to access your home folder (other humans, or users like www-data for a web server), you'll be fine with chmod o-rwx "$HOME" (remove read/write/execute from "other"; equivalent to chmod 750 "$HOME", since the default permission is 755). Otherwise, you should change the umask setting too, to prevent newly created files from getting read permission for the world by default.

For a system-wide configuration, edit /etc/profile; per-user settings can be configured in ~/.profile. I prefer the same policy for all users, so I'd edit the /etc/profile file and append the line:
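    umask 027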

You need to log in again to apply these changes; in a currently open shell, you can apply them immediately by running umask 027.

Now to fix the existing permissions, you need to remove the read/write/execute permissions from other:
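That is a recursive chmod over your home directory, e.g.:

    chmod -R o-rwx "$HOME"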

Now if you decide to share the ~/Public folder with everyone, run the next commands:

    find ~/Public -type f -exec chmod o+r {} \;   # allow everyone to read the files in ~/Public
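Directories additionally need the execute bit so their contents can be entered and listed, and others must be able to descend into your home directory to reach ~/Public. A companion sketch:

    chmod o+x "$HOME"
    find ~/Public -type d -exec chmod o+rx {} \;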


1 Answer

The permissions granted by an ACL are additive, but perhaps you're expecting them to be recursive? (they aren't)

You can almost get what you want with ACLs. You need to start out by setting the ACL recursively on every file and directory in the tree, as sketched below. Be sure to include the default:group:mygroup:rwx setting on directories. Now, any new directory will get those settings automatically applied to it, and any new file in those directories likewise.
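A sketch of that initial setup, assuming the tree is /srv/shared (a hypothetical path):

    # Grant mygroup rwx on everything that exists now (X adds execute only
    # where it already makes sense, i.e. directories and executables)...
    setfacl -R -m group:mygroup:rwX /srv/shared

    # ...and set it as the default ACL on directories, so new files and
    # subdirectories inherit it.
    find /srv/shared -type d -exec setfacl -m default:group:mygroup:rwX {} +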

There are two times when this still fails:

  • when someone moves a file or directory from outside the tree. Since the inode already exists, it won't get the defaults set on it.
  • when someone extracts files from an archive using an ACL-aware program which overwrites the defaults.

I don't know any way to handle those two cases other than writing a cron job to periodically run chgrp -R mygroup DIRECTORY && chmod -R g+rwx DIRECTORY. This may or may not be practical depending on the number of files in your shared directory.

Here's a slightly modified version of a script I use to fix ACLs on a tree of files. It completely overwrites any ACLs on anything in the tree with a specific list of read-write groups and read-only groups.
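A minimal re-sketch of the idea (group names and paths are placeholders, not the original script):

    #!/bin/bash
    # Wipe the ACLs on a tree and reapply fixed read-write and read-only groups.
    TREE="${1:?usage: $0 DIRECTORY}"
    RW_GROUPS="editors"   # groups that get read-write access
    RO_GROUPS="viewers"   # groups that get read-only access

    setfacl -R -b "$TREE"   # remove all existing ACL entries

    for g in $RW_GROUPS; do
        setfacl -R -m "group:$g:rwX" "$TREE"
        find "$TREE" -type d -exec setfacl -m "default:group:$g:rwX" {} +
    done

    for g in $RO_GROUPS; do
        setfacl -R -m "group:$g:rX" "$TREE"
        find "$TREE" -type d -exec setfacl -m "default:group:$g:rX" {} +
    done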


Java requirements

GeoServer is a software server written in Java, and as such it requires Java to be present in our environment. The process to install Java will differ according to our target server's architecture. However, in all cases, the first decision we must make is which version of Java to install and with which package. This is because Java is available in two main packages: the Java Development Kit (JDK) and the Java Runtime Environment (JRE). The JDK, as the name suggests, is used to develop Java applications, while the JRE is generally used to run Java applications (though the JDK also contains the JRE).

There are a number of different versions of Java available. However, the GeoServer project only supports the use of Java 6 (also known as Java™ 1.6) or newer. The most recent version is Java 7 (also known as Java 1.7), and GeoServer can be run against this version of Java. The choice between Java 6 and 7 will largely come down to either personal preference or specific system limitations, such as other software that has a dependency on a particular version. For example, Tomcat 8.0 now requires Java 7 as a minimum. The GeoServer documentation states that Java 7 offers the best performance, and so this is the version we will use.

The upcoming GeoServer 2.6 release will require JRE7 (1.7) as a minimum. At the time of writing, GeoServer 2.6 is at Release Candidate 1.

Prior to Version 2, GeoServer required the JDK to be installed in order to work; however, since Version 2, this is no longer a requirement, and GeoServer can run perfectly well using just the JRE. The key to managing a successful production environment is to make sure there is no unnecessary software or components installed that might introduce vulnerabilities or increase the management overhead. For these reasons, the JRE should be used to run GeoServer. The following sections will describe how to install Java in Linux and Windows environments.

Installing Java on CentOS 6.3

A well-designed production environment will be as lean as possible in terms of the resources consumed and the overall system footprint; one way to achieve this is to ensure that servers do not contain any more software than is absolutely necessary to deliver their intended function. So, in the case of a server being deployed to deliver mapping services, it should only contain the software necessary to deliver maps.

There are many different flavors of Linux available, and all of them are capable of running GeoServer without any issues; after all, Java is cross-platform! The choice of Linux distribution is often either a personal one or one enforced by company policy. There is a great deal of information available on installing GeoServer on an Ubuntu distribution, but very little on installing it on a CentOS distribution. CentOS is an enterprise-class distribution that closely follows the development of Red Hat Enterprise Linux, and it is a common installation in organizations. We will use CentOS 6.3, and in keeping with the philosophy of making sure that the server is lean, we will only use the minimal server installation.

By default, CentOS 6.3 comes preinstalled with OpenJDK 1.6, as a result of the potential licensing conflicts involved in distributing Oracle Java. The GeoServer documentation states that OpenJDK will work with GeoServer, but there might be issues, particularly with respect to 2D rendering performance. While OpenJDK can be used to run GeoServer, it is worth noting that the project does not run tests of GeoServer against OpenJDK, which means that there is a potential risk of failure if it is used in production.

As mentioned previously, Oracle Java is not packaged for the CentOS platform, and thus we will need to install it ourselves using a generic package direct from Oracle. To download Java, visit the Oracle Technology Network website:

Perform the following steps:

Download the current version of JRE 7 for the Linux platform, choosing the *.rpm file from the download list. At the time of writing, this file is jre-7u51-linux-x64.rpm .

The eagle-eyed amongst you might spot that this file is for a 64-bit flavor of Linux. GeoServer can be installed on both 32-bit and 64-bit architectures; however, installing on a 32-bit Linux architecture will require downloading the 32-bit version of the file, which at the time of writing is jre-7u51-linux-i586.rpm.

Once we download the package to our server, we need to install it.

Change to the directory where the package is downloaded and execute the following command:
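    # Install the package (file name as downloaded above):
    rpm -ivh jre-7u51-linux-x64.rpm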

This will result in the JRE being unpacked and installed to the /usr/java directory. Within this directory, there is a symbolic link called latest, which links to the actual JRE install folder. This symbolic link can be used in place of the lengthier JRE directory name. It is best practice to use the latest link so that future upgrades of the JRE do not cause Java-based software to stop working due to broken references.

Next, we need to tell CentOS that we want it to use Oracle JRE instead of the preinstalled OpenJDK. To do this, we make use of the alternatives command to specify the flavor of Java to use:
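A sketch of that command (the priority value, here 20000, is arbitrary):

    alternatives --install /usr/bin/java java /usr/java/latest/bin/java 20000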

This tells CentOS that any time the java command is used, it actually refers to the binary contained within the Oracle JRE directory and not the OpenJDK binary. The flavor of Java used by the system can be changed at any time by running the following command:
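    alternatives --config java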

The alternatives command should present you with the following prompt:
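(An illustrative listing; the entries depend on what is installed:)

    There are 2 programs which provide 'java'.

      Selection    Command
    -----------------------------------------------
       1           /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
    *+ 2           /usr/java/latest/bin/java

    Enter to keep the current selection[+], or type selection number: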


Depending on the number of programs configured to provide the java command, you will be presented with a list. The program that is currently responding to java is indicated by an asterisk.

In this case, Oracle JRE, which we just installed, is shown to be the active one. If Oracle JRE is not currently selected, then simply enter the number matching the /usr/java/latest/bin/java entry in your list.

An important thing to note here is the command entry for Oracle JRE. Notice how it matches the path that we used for the alternatives --install command. This is important, as it means that we can install future versions or updates of Oracle JRE without having to run the alternatives command again. Where possible, you should use the /usr/java/latest/bin/java path to reference Java, for example, in the JAVA_HOME environment variable.

We can now test whether our system is using Oracle JRE by issuing the following command:
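    java -version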

If all goes well, we should see the following response:
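(Build numbers shown here are typical for the 7u51 release:)

    java version "1.7.0_51"
    Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)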

Your version numbers might differ, but the rest should be the same; most importantly, we do not want to see the word OpenJDK anywhere.

Installing Java on Windows Server 2008 R2 SP1

If you target Windows Server in your production environment, life is a little simpler than it is for the users of Linux. For the purposes of this book, we will use Windows Server 2008 R2 SP1 Standard Edition; however, other versions of Windows Server on which Java can be installed should also work fine.

Once again, we will adopt the best practice of using Oracle JRE, and again we will use Version 1.7. Go ahead and download the Windows package for JRE from Oracle's Technology Network website:

At this point, we have a decision to make about which JRE installer to download, 32-bit or 64-bit. Making the right decision now is important as the choice of 32-bit versus 64-bit will have consequences later when configuring GeoServer. In the next section, we will discuss the installation of Apache Tomcat, which has a dependency on Java, in order to run GeoServer.

In the Windows environment, the Apache Tomcat installer will automatically install a 32-bit or 64-bit Windows Service based on the installed Java. So, a 64-bit installation of Java will mean that the Apache Tomcat service will also be installed as 64-bit.

The three factors influencing the choice of a 32-bit or 64-bit Java are:

The architecture on which you run Windows

Java VM memory configuration considerations

The use of native JAI and JAI Image I/O extensions

Hopefully, the first reason is self-explanatory. If you have a 32-bit version of Windows installed, you can only install a 32-bit version of Java. If you have a 64-bit Windows installation, then you can choose between the two versions. We are installing to Windows Server 2008 R2 SP1, which is only available in 64-bit; this means that the processor architecture of Windows is not a limitation. In this case, the decision comes down to the memory configuration and the use of the native JAI and JAI Image I/O extensions.

The memory consideration is an important one, since a 32-bit process, irrespective of whether it runs on a 32-bit or 64-bit processor architecture, can only address a maximum of 2 GB of memory. Therefore, if we want to maximize the available server memory, we will need to consider using the 64-bit version of Java. However, the JAI and JAI Image I/O extensions are only available on the Windows platform as 32-bit binaries. If we choose 64-bit Java, then we will not be able to use the extensions, which can be an issue if we plan on using our server to serve predominantly raster datasets. The native JAI and JAI Image I/O extensions can provide a significant performance increase when performing raster operations, for example, when responding to WMS requests.

Getting the most out of a production environment is as much about maximizing resource utilization as anything else. If we have a server with lots of memory, we can use 64-bit Java and allocate it a large chunk of memory, but then the only real advantage this provides is that it will allow us to do more concurrent raster operations. The maximum number of concurrent requests will still be limited by other factors, so this might not be the most efficient use of server resources. An alternative approach is to scale up by running multiple instances of GeoServer on the server. This is discussed in more detail later in this chapter. Scaling up means that we can maximize the usage of server resources (memory) without compromising our ability to utilize the native JAI and JAI Image I/O extensions.

To install the 32-bit version of Java, perform the following steps:

From the Oracle download page, choose the 32-bit Java installer, which at the time of writing is jre-7u51-windows-i586.exe , and save it to a local disk.

Open the folder where you saved the file, right-click on the file, and choose the Run as administrator menu item:

Accept all Windows UAC prompts that appear and wait for the Java installation wizard to open.

The installer will want to install Java to a default location, usually C:\Program Files (x86)\Java\jre7, but if you want to install it to a different folder, make sure to tick the Change destination folder checkbox placed at the bottom of the dialog:

Click on the Install button. If you did not tick the box to change the destination folder, then the installation will start.

If the Change destination folder checkbox was ticked, clicking on the Install button will prompt for the location to install to.

Specify the location you want to install to, and then click on the Next button; the installation starts.

If the installation is successful, you will be greeted with the following screen:

Closing the installation wizard will launch a web browser, where the installation of Java can be verified by following the steps on the page that loads.


2 Answers

It's possible to set different group and user access for files and directories, and this will allow both Apache and your user1 user to edit what's in /var/www without requiring root/sudo and without making anything world-writable.

So, set the "user" permission inside /var/www to user1. Set the "group" permission to www-data (but ONLY for the specific files or directories that the web server needs to write to).

You should avoid letting the web server write to the entire /var/www directory and its contents, instead granting the above group write permission only on the specific files and directories where this is necessary. It is a good security principle to limit the web server's write access to only those files where it is strictly necessary, and it is a good idea to try to ensure those files are not executed directly (that they aren't .php or other executable scripts, for example).
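A sketch of that layout, with hypothetical names (user1, a site under /var/www/example.com, and an uploads directory the web server must write to):

    sudo chown -R user1:www-data /var/www
    sudo chmod -R u=rwX,g=rX,o= /var/www          # user1 edits, www-data reads, others nothing
    sudo chmod g+w /var/www/example.com/uploads   # group write only where needed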


Apache won't index folder from another mount

I'm trying to enable directory listing for a folder outside the web root, on a different local ext4 mount and protected with Basic Authentication, but I'm getting an empty list and no logged errors. What's strange is that if I put the known location of a file under this directory in my browser, it downloads the file just fine.

Here's my example.conf file:

Also, I've commented IndexIgnore out in /etc/apache2/mods-enabled/autoindex.conf

I've run chmod -R 755 /blah1/blah2 , and chgrp -R www-data /blah1/blah2 and chmod a+x -R /blah1/blah2 . The folder owner is a member of www-data. If I run sudo usermod -a -G www-data myusername I can browse and read all files and folders just fine.

Doing some testing, my configuration works fine if I move /blah1/blah2 under my home directory and change the alias. There's something about it being on another mount that is messing up mod_autoindex, even though apache can clearly read the files themselves. Removing authentication doesn't help. With LogLevel warn I get no logged errors. After changing my LogLevel to trace4, here's my error log.

Here's the mount line from /etc/fstab :

EDIT Last note: to confirm that www-data can read and write to my folder, I made the following PHP script:

The result: directory testdir is created with owner www-data:www-data, and the list of directories and files is dumped as a variable.

EDIT2 I've run the following commands to set permissions correctly:


1 Answer

You can achieve this using automount and the multiuser option for mount.cifs. Install the required packages:
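On a Debian-based system, for example, that would be something like:

    sudo apt-get install autofs cifs-utils keyutils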

The following example assumes that the cifs server exports a share that is named after the user that is accessing it. Normally that would be suitable for home directories.

Add this to your /etc/auto.master :
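A sketch of such a configuration (mount root, map file name, server, and mount options are assumptions to adapt):

    # /etc/auto.master
    /cifs  /etc/auto.cifs  --timeout=300

    # /etc/auto.cifs
    *  -fstype=cifs,multiuser,sec=krb5,cruid=$USER  ://server.domain/&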

Make sure to replace server.domain with your file server. You could also use a fixed share this way: just replace the * with a fixed name, and also the &.

An important detail in the above configuration is the cruid option (cruid=$USER in the sketch). It makes the kernel look for a Kerberos ticket in the context of the user accessing the share; otherwise it would try root's ticket cache.

If you have a Kerberos ticket, it will mount the file system /cifs/$USER on first access. That means you need to explicitly type, e.g., cd /cifs/myuser, or perform a similar action in a GUI file browser. To avoid this, you could place symbolic links pointing to it somewhere else and tell users to access those.

If you are using a fixed share (not using * and & ) of course you would have to type cd /cifs/sharename .

Subsequent access by other users to the same share will be using their permissions, made possible by the multiuser option. No additional mount will be made but the existing one reused.

It is also possible to add the required automount maps to an LDAP server for central management, but this is probably beyond the scope of this answer.

In your question you asked for the mount to be mounted as root on boot. Technically this is done here in the form of a placeholder mount for autofs. Practically, the real mount is only made on first access by a user.

We are using this setup for ~100 clients at my workplace for accessing quite a big Lustre file system, and it works reliably.

