
Commit 7e676db

doc: update release_2.2 branch documentation

Update documentation in the release_2.2 branch with changes made after
tagging for code freeze.

Signed-off-by: David B. Kinder <[email protected]>

1 parent 3b6b5fb · commit 7e676db


61 files changed: +907 -511 lines (large commit; not all file diffs are shown below)

doc/develop.rst
Lines changed: 2 additions & 0 deletions

@@ -33,6 +33,7 @@ Service VM Tutorials
    :maxdepth: 1

    tutorials/running_deb_as_serv_vm
+   tutorials/using_yp

 User VM Tutorials
 *****************
@@ -72,6 +73,7 @@ Enable ACRN Features
    tutorials/acrn_on_qemu
    tutorials/using_grub
    tutorials/pre-launched-rt
+   tutorials/enable_ivshmem

 Debug
 *****

doc/developer-guides/hld/hld-emulated-devices.rst
Lines changed: 1 addition & 1 deletion

@@ -22,4 +22,4 @@ documented in this section.
    Hostbridge emulation <hostbridge-virt-hld>
    AT keyboard controller emulation <atkbdc-virt-hld>
    Split Device Model <split-dm>
-   Shared memory based inter-vm communication <ivshmem-hld>
+   Shared memory based inter-VM communication <ivshmem-hld>

doc/developer-guides/hld/hld-overview.rst
Lines changed: 2 additions & 2 deletions

@@ -5,7 +5,7 @@ ACRN high-level design overview

 ACRN is an open source reference hypervisor (HV) that runs on top of Intel
 platforms (APL, KBL, etc) for heterogeneous scenarios such as the Software Defined
-Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & Real-Time OS for industry. ACRN provides embedded hypervisor vendors with a reference
+Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & real-time OS for industry. ACRN provides embedded hypervisor vendors with a reference
 I/O mediation solution with a permissive license and provides auto makers and
 industry users a reference software stack for corresponding use.

@@ -124,7 +124,7 @@ ACRN 2.0
 ========

 ACRN 2.0 is extending ACRN to support pre-launched VM (mainly for safety VM)
-and Real-Time (RT) VM.
+and real-time (RT) VM.

 :numref:`overview-arch2.0` shows the architecture of ACRN 2.0; the main difference
 compared to ACRN 1.0 is that:

doc/developer-guides/hld/hld-security.rst
Lines changed: 1 addition & 1 deletion

@@ -1016,7 +1016,7 @@ access is like this:
 #. If the verification is successful in eMMC RPMB controller, then the
    data will be written into storage device.

-This work flow of authenticated data read is very similar to this flow
+This workflow of authenticated data read is very similar to this flow
 above, but in reverse order.

 Note that there are some security considerations in this design:

doc/developer-guides/hld/hld-virtio-devices.rst
Lines changed: 1 addition & 1 deletion

@@ -358,7 +358,7 @@ general workflow of ioeventfd.
    :align: center
    :name: ioeventfd-workflow

-   ioeventfd general work flow
+   ioeventfd general workflow

 The workflow can be summarized as:

doc/developer-guides/hld/hv-ioc-virt.rst
Lines changed: 2 additions & 2 deletions

@@ -13,7 +13,7 @@ SoC and back, as well as signals the SoC uses to control onboard
 peripherals.

 .. note::
-   NUC and UP2 platforms do not support IOC hardware, and as such, IOC
+   Intel NUC and UP2 platforms do not support IOC hardware, and as such, IOC
    virtualization is not supported on these platforms.

 The main purpose of IOC virtualization is to transfer data between
@@ -131,7 +131,7 @@ There are five parts in this high-level design:
 * State transfer introduces IOC mediator work states
 * CBC protocol illustrates the CBC data packing/unpacking
 * Power management involves boot/resume/suspend/shutdown flows
-* Emulated CBC commands introduces some commands work flow
+* Emulated CBC commands introduces some commands workflow

 IOC mediator has three threads to transfer data between User VM and Service VM. The
 core thread is responsible for data reception, and Tx and Rx threads are

doc/developer-guides/hld/hv-partitionmode.rst
Lines changed: 2 additions & 2 deletions

@@ -57,8 +57,8 @@ configuration and copies them to the corresponding guest memory.
 .. figure:: images/partition-image18.png
    :align: center

-ACRN set-up for guests
-**********************
+ACRN setup for guests
+*********************

 Cores
 =====

doc/developer-guides/hld/hv-rdt.rst
Lines changed: 4 additions & 4 deletions

@@ -39,7 +39,7 @@ resource allocator.) The user can check the cache capabilities such as cache
 mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
 and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
 CLOS ID, to select a cache mask to take effect. These configurations can be
-done in scenario xml file under ``FEATURES`` section as shown in the below example.
+done in scenario XML file under ``FEATURES`` section as shown in the below example.
 ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
 to enforce the settings.

@@ -52,7 +52,7 @@ to enforce the settings.
    <CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section. If user desires
+needs to be set in the scenario XML file under ``VM`` section. If user desires
 to use CDP feature, CDP_ENABLED should be set to ``y``.

 .. code-block:: none
@@ -106,7 +106,7 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
 users can check the MBA capabilities such as mba delay values and
 max supported CLOS as described in :ref:`rdt_detection_capabilities` and
 then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
-These configurations can be done in scenario xml file under ``FEATURES`` section
+These configurations can be done in scenario XML file under ``FEATURES`` section
 as shown in the below example. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
 for non-root and root modes to enforce the settings.

@@ -120,7 +120,7 @@ for non-root and root modes to enforce the settings.
    <MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section.
+needs to be set in the scenario XML file under ``VM`` section.

 .. code-block:: none
    :emphasize-lines: 2
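
Putting the snippets from this section together, a scenario-XML sketch of the FEATURES block being described might look like the following. The ``CLOS_MASK``, ``MBA_DELAY``, and ``CDP_ENABLED`` names appear in the text above; the enclosing ``RDT``/``RDT_ENABLED`` element names and the values are assumptions for illustration, so check your scenario file's schema.

    <FEATURES>
        <RDT>
            <RDT_ENABLED>y</RDT_ENABLED>  <!-- assumed element name -->
            <CDP_ENABLED>n</CDP_ENABLED>
            <CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>
            <CLOS_MASK desc="Cache Capacity Bitmask">0x3</CLOS_MASK>
            <MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>
        </RDT>
    </FEATURES>

Each VM then selects one of these CLOS entries by ID in its ``VM`` section, per the text above.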

doc/developer-guides/hld/ivshmem-hld.rst
Lines changed: 17 additions & 92 deletions

@@ -15,17 +15,24 @@ Inter-VM Communication Overview
    :align: center
    :name: ivshmem-architecture-overview

-   ACRN shared memory based inter-vm communication architecture
+   ACRN shared memory based inter-VM communication architecture

-The ``ivshmem`` device is emulated in the ACRN device model (dm-land)
-and its shared memory region is allocated from the Service VM's memory
-space. This solution only supports communication between post-launched
-VMs.
+There are two ways ACRN can emulate the ``ivshmem`` device:

-.. note:: In a future implementation, the ``ivshmem`` device could
-   instead be emulated in the hypervisor (hypervisor-land) and the shared
-   memory regions reserved in the hypervisor's memory space. This solution
-   would work for both pre-launched and post-launched VMs.
+``ivshmem`` dm-land
+   The ``ivshmem`` device is emulated in the ACRN device model,
+   and the shared memory regions are reserved in the Service VM's
+   memory space. This solution only supports communication between
+   post-launched VMs.
+
+``ivshmem`` hv-land
+   The ``ivshmem`` device is emulated in the hypervisor, and the
+   shared memory regions are reserved in the hypervisor's
+   memory space. This solution works for both pre-launched and
+   post-launched VMs.
+
+While both solutions can be used at the same time, Inter-VM communication
+may only be done between VMs using the same solution.

 ivshmem hv:
    The **ivshmem hv** implements register virtualization
@@ -98,89 +105,7 @@ MMIO Registers Definition
 Usage
 *****

-To support two post-launched VMs communicating via an ``ivshmem`` device,
-add this line as an ``acrn-dm`` boot parameter::
-
-  -s slot,ivshmem,shm_name,shm_size
-
-where
-
-- ``-s slot`` - Specify the virtual PCI slot number
-
-- ``ivshmem`` - Virtual PCI device name
-
-- ``shm_name`` - Specify a shared memory name. Post-launched VMs with the
-  same ``shm_name`` share a shared memory region.
-
-- ``shm_size`` - Specify a shared memory size. The two communicating
-  VMs must define the same size.
-
-.. note:: This device can be used with Real-Time VM (RTVM) as well.
-
-Inter-VM Communication Example
-******************************
-
-The following example uses inter-vm communication between two Linux-based
-post-launched VMs (VM1 and VM2).
-
-.. note:: An ``ivshmem`` Windows driver exists and can be found `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_
-
-1. Add a new virtual PCI device for both VMs: the device type is
-   ``ivshmem``, shared memory name is ``test``, and shared memory size is
-   4096 bytes. Both VMs must have the same shared memory name and size:
-
-   - VM1 Launch Script Sample
-
-     .. code-block:: none
-        :emphasize-lines: 7
-
-        acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-          -s 2,pci-gvt -G "$2" \
-          -s 5,virtio-console,@stdio:stdio_port \
-          -s 6,virtio-hyper_dmabuf \
-          -s 3,virtio-blk,/home/acrn/uos1.img \
-          -s 4,virtio-net,tap0 \
-          -s 6,ivshmem,test,4096 \
-          -s 7,virtio-rnd \
-          --ovmf /usr/share/acrn/bios/OVMF.fd \
-          $vm_name
-
-   - VM2 Launch Script Sample
-
-     .. code-block:: none
-        :emphasize-lines: 5
-
-        acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-          -s 2,pci-gvt -G "$2" \
-          -s 3,virtio-blk,/home/acrn/uos2.img \
-          -s 4,virtio-net,tap0 \
-          -s 5,ivshmem,test,4096 \
-          --ovmf /usr/share/acrn/bios/OVMF.fd \
-          $vm_name
-
-2. Boot two VMs and use ``lspci | grep "shared memory"`` to verify that the virtual device is ready for each VM.
-
-   - For VM1, it shows ``00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
-   - For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
-
-3. Use these commands to probe the device::
-
-     $ sudo modprobe uio
-     $ sudo modprobe uio_pci_generic
-     $ sudo echo "1af4 1110" > /sys/bus/pci/drivers/uio_pci_generic/new_id
-
-4. Finally, a user application can get the shared memory base address from
-   the ``ivshmem`` device BAR resource
-   (``/sys/class/uio/uioX/device/resource2``) and the shared memory size from
-   the ``ivshmem`` device config resource
-   (``/sys/class/uio/uioX/device/config``).
-
-   The ``X`` in ``uioX`` above, is a number that can be retrieved using the
-   ``ls`` command:
-
-   - For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
-   - For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
+For usage information, see :ref:`enable_ivshmem`

 Inter-VM Communication Security hardening (BKMs)
 ************************************************
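
The usage steps removed above (now maintained in the enable_ivshmem tutorial) end with a user application mapping the shared memory through the UIO BAR resource. Here is a minimal C sketch of that final step, assuming the ``uio0`` node and the 4096-byte region from the removed example; error handling is trimmed and paths vary per VM.

    /* Sketch: map the ivshmem shared memory exposed through UIO, as in
     * step 4 of the removed usage section. The uio0 path and 4096-byte
     * size come from the example above -- adjust for your setup. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *bar2 = "/sys/class/uio/uio0/device/resource2";
        size_t shm_size = 4096;    /* must match shm_size passed to acrn-dm */
        int fd = open(bar2, O_RDWR);

        if (fd < 0) {
            perror("open resource2");
            return 1;
        }

        void *shm = mmap(NULL, shm_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (shm == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        /* Writer side; the peer VM maps the same region and reads back. */
        strcpy((char *)shm, "hello from VM1");

        munmap(shm, shm_size);
        close(fd);
        return 0;
    }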

doc/developer-guides/hld/system-timer-hld.rst
Lines changed: 1 addition & 1 deletion

@@ -86,7 +86,7 @@ I/O ports definition::
 RTC emulation
 =============

-ACRN supports RTC (Real-Time Clock) that can only be accessed through
+ACRN supports RTC (real-time clock) that can only be accessed through
 I/O ports (0x70 and 0x71).

 0x70 is used to access CMOS address register and 0x71 is used to access
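
To make the index/data pattern in the RTC paragraph concrete, here is a minimal sketch of reading the seconds register from privileged Linux userspace. Register 0x00 being the seconds register follows the standard MC146818 CMOS map, and values are typically BCD-encoded; this is an illustration, not ACRN code.

    /* Sketch of the CMOS/RTC access pattern described above: write a
     * register index to port 0x70, then read its value from port 0x71.
     * Requires root for ioperm(); x86-only (sys/io.h). */
    #include <stdio.h>
    #include <sys/io.h>

    int main(void)
    {
        if (ioperm(0x70, 2, 1)) {      /* request access to ports 0x70-0x71 */
            perror("ioperm");
            return 1;
        }

        outb(0x00, 0x70);              /* select CMOS register 0x00 (seconds) */
        unsigned char sec = inb(0x71); /* read it through the data port */

        printf("RTC seconds (raw, likely BCD): 0x%02x\n", sec);
        return 0;
    }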
