Comp.os.research: Frequently answered questions [3/3: l/m 13 Aug 1996]
Section - [1.5.3] Transfer and caching granularity

Previous Document: [1.5.2] Access synchronisation
Next Document: [1.5.4] Address space structure
From: Distributed systems

When caching objects in local memory, it is necessary to decide what
level of granularity to use.  All current systems use a fixed block
size in the cache, rather than varying the granularity based on
object size; this is usually due to constraints imposed by the
system's hardware and memory management.
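
As a concrete illustration (not from the original text), the
following C sketch shows the usual consequence of a fixed,
power-of-two block size: any object address can be mapped onto its
containing cache block with simple mask arithmetic.  The 4096-byte
block size and the function names are assumptions, not taken from
any particular system.

    #include <stdint.h>
    #include <stddef.h>

    #define BLOCK_SIZE 4096   /* assumed: one hardware page */

    /* Round an address down to the start of its containing block
       (BLOCK_SIZE must be a power of two for the mask to work). */
    static uintptr_t block_base(uintptr_t addr)
    {
        return addr & ~(uintptr_t)(BLOCK_SIZE - 1);
    }

    /* Offset of the address within its block. */
    static size_t block_offset(uintptr_t addr)
    {
        return (size_t)(addr & (BLOCK_SIZE - 1));
    }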

The choice of the block size in the cache depends on several issues:

- Cost of communication: on many local area networks, fixed
  per-message overheads (protocol processing, interrupt handling,
  medium access) dominate the cost of small transfers, so there is
  little difference between the time required to send a one-byte
  message and that required to send a 1024-byte message.
  Transmitting bulk changes rather than single-byte modifications
  would therefore seem desirable, which argues for a larger block
  size.

- Locality of reference in the application: thrashing may occur when
  two machines repeatedly access data that happen to lie in the same
  block, so that the block migrates back and forth between them (this
  is also known as the `ping-pong effect'; see the sketch after this
  list).  This would seem to argue for a smaller block size.  It
  should be noted that many object-oriented systems exhibit very poor
  locality of reference, which makes them particularly prone to this
  problem.
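
To make the ping-pong effect concrete, here is a hedged C sketch
(the layout and the 4096-byte block size are illustrative
assumptions, not taken from any particular DSM system).  If two
nodes each update their own counter but both counters fall within
one block, the block migrates between the nodes on every write;
padding each counter out to the block size places them in separate
blocks and removes the contention.

    #define BLOCK_SIZE 4096   /* assumed DSM block size */

    /* Both fields share one block: writes by node A invalidate
       node B's copy of the block, and vice versa. */
    struct counters_bad {
        long node_a_hits;
        long node_b_hits;
    };

    /* One block per field (assuming the structure itself is
       allocated on a block boundary): no ping-ponging. */
    struct counters_good {
        long node_a_hits;
        char pad[BLOCK_SIZE - sizeof(long)];
        long node_b_hits;
    };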

In practice, a compromise must be achieved, as with conventional
virtual memory systems.  Most systems use a block size equal to the
page size of the machine's virtual memory management unit, or a
multiple thereof.  Among other things, this allows the paging
hardware (page protection and fault handling) to be used to help
maintain consistency.  The choice is complicated somewhat when
heterogeneous machines are being used, but in these cases the lowest
common multiple of the hardware-supported page sizes can usually be
used, as sketched below.
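
A minimal sketch of the heterogeneous case, assuming only that the
page size of each participating machine is known (the function names
and example values are illustrative):

    #include <stddef.h>

    static size_t gcd(size_t a, size_t b)
    {
        while (b != 0) {
            size_t t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    static size_t lcm(size_t a, size_t b)
    {
        return a / gcd(a, b) * b;
    }

    /* Common DSM block size: the lowest common multiple of the
       hardware page sizes in the cluster (assumes n >= 1). */
    static size_t common_block_size(const size_t *page_sizes, size_t n)
    {
        size_t block = page_sizes[0];
        for (size_t i = 1; i < n; i++)
            block = lcm(block, page_sizes[i]);
        return block;
    }

For example, a cluster mixing 4096-byte and 8192-byte pages would use
8192-byte blocks: since the page sizes are powers of two, the lowest
common multiple is simply the largest of them.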

The only major system that doesn't use a large block size is Memnet,
a hardware-based DSM system implemented on a high-speed token ring,
which uses a 32-byte block size instead [Delp & Farber].  The choice
of a small block size is appropriate, as the system is much closer
to a shared-memory multiprocessor than to a software DSM system.
This is because the entire processor blocks on a cache miss; the
processor is not actually aware of the distributed nature of its
address space.  Also, the ratio between remote and local memory
access times is much lower than in software-based systems, thanks to
the dedicated 200 Mbit/s token ring and the hardware assistance.
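
As a rough back-of-the-envelope figure (not in the original text):
at 200 Mbit/s, the raw transmission time for a 32-byte block is
(32 * 8) / (200 * 10^6) = 1.28 microseconds, whereas a 4096-byte
page at the same rate would take about 164 microseconds before any
protocol or software overhead is counted.  The small block thus
keeps the cost of a remote miss comparatively close to that of a
local access.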


Send corrections/additions to the FAQ Maintainer:
os-faq@cse.ucsc.edu