
System Design

In 1995, after a long study of candidate technologies for distributed computing from the viewpoints of security, scalability, user-friendliness, and ease of management, operation and maintenance, we decided to build KEKCC on DCE and DFS. Many vendors offer DCE products based on the source code provided by The Open Group, together with their own enhancements of DCE and DFS, and better support can be expected from a vendor than for a third-party product such as AFS. For these reasons we chose DCE and DFS. Their design also promised high scalability and availability, even for very large-scale distributed computing systems.

In the following, the design philosophy of the KEKCC system is described:

  1. Multiple DCE cells
    This system is shared by many research groups. To allow each research group to run its cluster under its own policy, the hardware of each cluster system is configured independently, and one DCE cell is created per research group. KEKCC thus comprises 10 DCE cells. User accounts and home directories are managed separately in each cluster system.
  2. Inter-cell communication
    Inter-cell communication is used to access the other cluster systems. Currently, it is mainly used for global system monitoring.
  3. Seamlessness
    We integrated UNIX login mechanisms such as xdm and ftp with DCE login so that users can migrate seamlessly from the legacy systems to DCE. In practice, users encounter the DCE user interface only in rare cases, such as when changing their password or shell, or when specifying a full file path.
  4. Integrating HSM and DCE/DFS
    All files, including home directories and data files, are accessed through DFS. The file systems in the HSM must also be accessed through DFS so that transparent file migration is available from any client.
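The multi-cell design above relies on the DFS global namespace: files in the local cell are reached through the shorthand prefix /:/, while files in another group's cell are addressed through the global junction /.../<cell-name>/fs/. The small sketch below illustrates this path convention (the cell names and file paths are hypothetical, chosen only for illustration):

```python
# Sketch of the DFS global-namespace path convention assumed in this design.
# "/..." is the DFS global root; each cell's file space hangs under
# "/.../<cell-name>/fs". The cell name "cellb.kek.jp" is hypothetical.

def dfs_path(cell_name: str, rel_path: str) -> str:
    """Build a cross-cell DFS path from a DCE cell name and a path
    relative to that cell's fileset root."""
    return f"/.../{cell_name}/fs/{rel_path}"

# A user authenticated in cell A (e.g. via dce_login) could read another
# group's monitoring data at a path like this:
print(dfs_path("cellb.kek.jp", "data/run001"))  # /.../cellb.kek.jp/fs/data/run001
```

Because every cell's files appear under one global root, the inter-cell monitoring described in item 2 needs no special transfer mechanism; it is ordinary file access under DFS authorization.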


Figure 1: KEK Distributed Computer System Configuration

The configuration of the CPU server and data server of each research group is shown in Figure 1 and Table 1. The Hitachi 3500 is an SMP server equipped with 120-MHz PA-7200 processors. Its operating system, HI-UX, was developed by Hitachi Ltd. All Hitachi 3500s are interconnected by HS-Link, a Hitachi crossbar switch network that achieves a 20 MB/s transmission rate at the physical level.
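As a rough sanity check of the HS-Link figure, the ideal time to move a data set between servers at 20 MB/s can be estimated as follows (the 5 GB data-set size is hypothetical, and protocol overhead is ignored):

```python
# Back-of-the-envelope transfer-time estimate over HS-Link.
# The 20 MB/s rate is the physical-level figure quoted in the text;
# the 5 GB data-set size is an assumed example value.

def transfer_time_s(size_mb: float, rate_mb_per_s: float = 20.0) -> float:
    """Ideal transfer time in seconds, ignoring protocol overhead."""
    return size_mb / rate_mb_per_s

size_mb = 5 * 1024  # 5 GB expressed in MB
print(transfer_time_s(size_mb))  # 256.0 seconds, a little over 4 minutes
```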

The experimental groups need a huge storage capacity for their data. To meet this demand, the system is equipped with large-scale mass-storage systems: a SONY PetaSite tape system [1] with a capacity of 20 TB, and a 1 TB RAID system.
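Under the HSM scheme of item 4 above, the RAID disk acts as the resident cache in front of the tape store, so the capacities quoted imply how much of the archive can be on disk at once:

```python
# Fraction of the tape archive that fits on the disk cache at any one
# time, using the capacities quoted in the text (20 TB tape, 1 TB RAID).

tape_tb = 20.0
disk_tb = 1.0
cache_fraction = disk_tb / tape_tb
print(f"{cache_fraction:.0%}")  # 5%
```

This 5% ratio is why transparent migration matters: most files live only on tape, and clients must be able to trigger recall through an ordinary DFS access.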

Table 1: Configuration of major clusters

Figure 2 shows the hardware configuration of the central service, which forms a DCE cell of its own.


Figure 2: Central System Configuration




S. Yashiro, T. Sasaki, Y. Morita, T. Ishikawa, H. Mawatari, Y. Watase
N. Takashimizu, T. Itoh, Y. Kodama, Y. Munakata, S. Kohno, H. Itoh
Tue May 20 12:18:02 JST 1997