SUSE Linux Enterprise Server 12
This document provides guidance and an overview to high level general features and updates for SUSE Linux Enterprise Server 12. Besides architecture or product-specific information, it also describes the capabilities and limitations of SLES 12. General documentation may be found at: http://www.suse.com/documentation/sles-12/.
- 1 SUSE Linux Enterprise Server
- 2 Installation and Upgrade
- 3 Infrastructure, Package and Architecture Specific Information
- 4 AMD64/Intel64 64-Bit (x86_64) Specific Information
- 5 POWER (ppc64le) Specific Information
- 6 System z (s390x) Specific Information
- 7 Driver Updates
- 8 Packages and Functionality Changes
- 9 Technical Information
- 10 Legal Notices
- 11 Colophon
SUSE Linux Enterprise Server is a highly reliable, scalable, and secure server operating system, built to power mission-critical workloads in both physical and virtual environments. It is an affordable, interoperable, and manageable open source foundation. With it, enterprises can cost-effectively deliver core business services, enable secure networks, and simplify the management of their heterogeneous IT infrastructure, maximizing efficiency and value.
The only enterprise Linux recommended by Microsoft and SAP, SUSE Linux Enterprise Server is optimized to deliver high-performance mission-critical services, as well as edge of network, and web infrastructure workloads.
Designed for interoperability, SUSE Linux Enterprise Server integrates into classical Unix as well as Windows environments, supports open standard interfaces for systems management, and has been certified for IPv6 compatibility.
This modular, general purpose operating system runs on three processor architectures and is available with optional extensions that provide advanced capabilities for tasks such as real time computing and high availability clustering.
SUSE Linux Enterprise Server is optimized to run as a high performing guest on leading hypervisors and supports an unlimited number of virtual machines per physical system with a single subscription, making it the perfect guest operating system for virtual computing.
SUSE Linux Enterprise Server is backed by award-winning support from SUSE, an established technology leader with a proven history of delivering enterprise-quality support services.
SUSE Linux Enterprise Server 12 has a 13-year life cycle, with 10 years of General Support and 3 years of Extended Support. The current version (GA) will be fully maintained and supported until 6 months after the release of SUSE Linux Enterprise Server 12 SP1. If you need additional time to design, validate, and test your upgrade plans, Long Term Service Pack Support can extend your support by an additional 12 to 36 months, in twelve-month increments, giving you a total of 3 to 5 years of support on any given service pack.
For more information, check our Support Policy page https://www.suse.com/support/policy.html or the Long Term Service Pack Support Page https://www.suse.com/support/programs/long-term-service-pack-support.html.
Note: Fix Status of the GNU Bourne Again Shell (bash)
Given the proximity of the SUSE Linux Enterprise 12 release to the publication of the “shellshock” series of vulnerabilities in the GNU Bourne Again Shell (bash), we want to provide customers with information on the fix status of the bash version shipped in the SLE 12 GA release:
CVE-2014-6271 (original shellshock)
CVE-2014-7169 (taviso bug)
CVE-2014-7186 (redir_stack bug)
CVE-2014-7187 and
non-exploitable CVE-2014-6277
non-exploitable CVE-2014-6278
Up-to-date information is available online at https://www.suse.com/support/shellshock/.
SUSE Linux Enterprise Server 12 introduces a number of innovative changes. Here are some of the highlights:
Robustness against administrative errors and improved management capabilities through full system rollback, based on Btrfs as the default file system for the operating system partition and SUSE's Snapper technology.
An overhaul of the installer introduces a new workflow that allows you to register your system and receive all available maintenance updates as part of the installation.
SUSE Linux Enterprise Server Modules offer a choice of supplemental packages, ranging from tools for Web Development and Scripting, through a Cloud Management module, all the way to a sneak preview of SUSE's upcoming management tooling called Advanced Systems Management. Modules are part of your SUSE Linux Enterprise Server subscription, are technically delivered as online repositories, and differ from the base of SUSE Linux Enterprise Server only by their lifecycle.
New core technologies like systemd (replacing the time honored System V based init process) and wicked (introducing a modern, dynamic network configuration infrastructure).
The open source database system MariaDB is fully supported now.
Support for open-vm-tools, together with VMware, for better integration into VMware-based hypervisor environments.
Linux Containers are integrated into the virtualization management infrastructure (libvirt). Docker is now fully supported.
Support for the 64-bit little-endian variant of IBM's POWER architecture, in addition to continued support for the Intel 64/AMD64 and IBM System z architectures.
GNOME 3.10 (or just GNOME 3), giving users a modern desktop environment with a choice of several different look and feel options, including a special SUSE Linux Enterprise Classic mode for easier migration from earlier SUSE Linux Enterprise desktop environments.
For users wishing to use the full range of productivity applications of a Desktop with their SUSE Linux Enterprise Server, we are now offering the SUSE Linux Enterprise Workstation Extension (needs a SUSE Linux Enterprise Desktop subscription).
Integration with the new SUSE Customer Center, SUSE's central web portal to manage Subscriptions, Entitlements, and provide access to Support.
For users upgrading from a previous SUSE Linux Enterprise Server release, we recommend reviewing the following:
- Read the READMEs on the media.
- Get detailed changelog information about a particular package from its RPM: `rpm -qp --changelog <FILENAME>`.
- Check the `ChangeLog` file in the top level of the media for a chronological log of all changes made to the updated packages.
- Find more information in the `docu` directory of the media of SUSE Linux Enterprise Server 12. This directory includes PDF versions of the SUSE Linux Enterprise Server 12 Installation Quick Start and Deployment Guides. Documentation (if installed) is available below the `/usr/share/doc/` directory of an installed system.
- These Release Notes are identical across all architectures, and the most recent version is always available online at http://www.suse.com/releasenotes/. Some entries are listed twice, if they are important and belong to more than one section.
- http://www.suse.com/documentation/sles-12/ contains additional or updated documentation for SUSE Linux Enterprise Server 12.
- Find a collection of White Papers in the SUSE Linux Enterprise Server Resource Library at https://www.suse.com/products/server/resource-library/?ref=b#WhitePapers.
This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at http://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to mailto:sle_source_request@suse.com or as otherwise instructed at http://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.
1.4 Support Statement for SUSE Linux Enterprise Server
To receive support, customers need an appropriate subscription with SUSE; for more information, see http://www.suse.com/products/server/services-and-support/.
The following definitions apply:
- Level 1: Problem determination, which means technical support designed to provide compatibility information, usage support, on-going maintenance, information gathering, and basic troubleshooting using available documentation.
- Level 2: Problem isolation, which means technical support designed to analyze data, duplicate customer problems, isolate a problem area, and provide resolution for problems not resolved by Level 1, or alternatively prepare for Level 3.
- Level 3: Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.
For contracted customers and partners, SUSE Linux Enterprise Server 12 and its Modules are delivered with L3 support for all packages, except the following:
Technology Previews
sound, graphics, fonts and artwork
packages that require an additional customer contract
packages provided as part of the Software Development Kit (SDK)
SUSE will only support the usage of original (that is, unchanged and not recompiled) packages.
1.4.1.1 Wayland Libraries Are Not Supported on SLES 12 GA and SP1
Wayland is not supported on SLES 12 GA and SLES 12 SP1. While some Wayland libraries are available, they should not be installed and are not supported by SUSE.
The MariaDB open source database replaces the MySQL database system.
To retain compatibility with existing (MySQL-based) deployments and dependencies, MariaDB uses the name `libmysql.so` for its shared libraries. Thus, according to the SUSE and openSUSE Shared Library Policy, the RPMs for the MariaDB shared libraries are called `libmysql`.
For more information about the SUSE and openSUSE Shared Library Policy, see http://en.opensuse.org/openSUSE:Shared_library_packaging_policy.
IBM Java 6 will be supported as part of SLES 12 until September 2017 on the x86_64 and System z architectures.
Technology previews are packages, stacks, or features delivered by SUSE. These features are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are mainly included for customer convenience and give customers a chance to test new technologies within an enterprise environment.
Whether a technology preview is moved to a fully supported package later depends on customer and market feedback. A technology preview does not automatically result in support at a later point in time; technology previews can be dropped at any time, and SUSE is not committed to providing a technology preview later in the product cycle.
Please give your SUSE representative feedback, including your experience and use case.
OpenJDK is available as a technology preview.
`sle2docker` is a convenience tool that creates SUSE Linux Enterprise images for Docker. The tool relies on KIWI and Docker itself to build the images. Packages can be fetched either from SUSE Customer Center (SCC) or from a local Subscription Management Tool (SMT).
Hot-add memory is currently only supported on the following hardware:
- certified systems based on recent Intel Xeon architecture
- Fujitsu PRIMEQUEST 2000 series
If your specific machine is not listed, call SUSE support to confirm whether or not your machine has been successfully tested. Also, regularly check our maintenance update information, which will explicitly mention the general availability of this feature.
`virtio-blk-data-plane` is a new experimental performance feature for KVM. It provides a streamlined block I/O path, which favors performance over functionality.
VMCS Shadowing is a new VT-x feature that allows software in VMX non-root operation to execute the VMREAD and VMWRITE instructions. Such executions do not read from the current VMCS (the one supporting VMX non-root operation) but instead from a shadow VMCS. This feature will help improve nested virtualization performance. VMCS shadowing is provided as technology preview.
The experimental QEMU TPM passthrough feature should not be used in environments where non-root access is granted to the host. To enable TPM passthrough, the following actions must be taken in addition to allocating the device in the guest domain XML:
- The guest must pass `tpm_tis.force=1` on the guest kernel command line. This can be done via the bootloader configuration, typically found in `/boot/grub2/grub.cfg`. Be aware that YaST autogenerates this configuration file. It is therefore better to use the Kernel Parameter tab of the YaST Boot Loader dialog to append `tpm_tis.force=1` to the kernel command-line parameters, or to edit `/etc/default/grub` and then run `grub2-mkconfig -o /boot/grub2/grub.cfg`.
- The host administrator must run `chmod o+w /sys/class/misc/tpm0/device/cancel`. As this permits host-wide access to cancel TPM commands by unprivileged users, no unprivileged users must be permitted to access the host while it is in this configuration. It is anticipated that future versions of libvirt will perform the privileged access of `/sys/class/misc/tpm0/device/cancel` on QEMU's behalf, so that permitting world write access to it will no longer be necessary.
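As a sketch of the first step, the following excerpt shows the relevant line of `/etc/default/grub` with the parameter appended; the other parameters on the line are placeholder examples, and `grub2-mkconfig -o /boot/grub2/grub.cfg` must be run afterwards.

```shell
# /etc/default/grub (excerpt) -- example values except for tpm_tis.force=1;
# regenerate the bootloader configuration afterwards with:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet tpm_tis.force=1"
```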
Usually, when a system's physical memory is exceeded, the system moves some memory onto reserved space on a hard drive, called 'swap' space. This frees physical memory space for additional use. However, this process of 'swapping' memory onto (and back from) a hard drive is much, much slower than direct memory access, so it can slow down the entire system.
Starting with SLES 12, you can enable the `zswap` driver using the boot parameter `zswap.enabled=1`. The `zswap` driver inserts itself between the system and the swap device: instead of writing memory to a hard drive, it compresses memory. This speeds up both writing to and reading from swap, which results in better overall system performance while swap is in use.
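A quick runtime check (a sketch; the parameter file only exists on kernels built with zswap support):

```shell
# prints Y if zswap is enabled, N if it is built but disabled,
# or a notice on kernels without zswap support
cat /sys/module/zswap/parameters/enabled 2>/dev/null \
    || echo "zswap not available on this kernel"
```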
Compression Limits
The effective compression ratio cannot exceed 50 percent; that is, `zswap` can at most store two uncompressed pages in one compressed page. If the workload's compression ratio exceeds 50 percent for all pages, `zswap` will not be able to save any memory.
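To illustrate the ceiling with made-up numbers: at the best-case 2:1 ratio, a set of swapped-out pages occupies half its uncompressed size in the pool.

```shell
# best case: two uncompressed 4 KiB pages fit into one compressed page
pages=1024    # example number of swapped-out pages
page_kb=4     # page size in KiB on x86_64
echo "uncompressed size: $((pages * page_kb)) kB"      # 4096 kB
echo "best-case pool use: $((pages * page_kb / 2)) kB" # 2048 kB
```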
Setting zswap Memory
Compressed memory still uses a certain amount of memory, so `zswap` limits the amount of memory that is stored compressed. This limit is controllable through the file `/sys/module/zswap/parameters/max_pool_percent`. By default, it is set to `20`, which means that `zswap` will use up to 20 percent of the total system physical memory to store compressed memory.
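The following sketch computes what the default cap amounts to on the local machine; the percentage is hard-coded rather than read from `/sys/module/zswap/parameters/max_pool_percent`, since that file only exists on kernels with zswap support.

```shell
# worst-case size of the compressed pool under the default 20% cap
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
pool_pct=20   # default value of max_pool_percent
echo "zswap pool cap: $((total_kb * pool_pct / 100)) kB of ${total_kb} kB RAM"
```

To change the cap at runtime on a zswap-enabled kernel, write a new value to the parameter file, for example `echo 25 > /sys/module/zswap/parameters/max_pool_percent`.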
The `zswap` memory limit has to be configured carefully. Setting the limit too high can lead to premature out-of-memory situations that would not occur without `zswap`, if the memory is filled with non-swappable, non-reclaimable pages. This includes mlocked memory and pages locked by drivers and other kernel users.
For the same reason, performance can also be hurt by compression and decompression if the current workload's working set would fit in, for example, 90 percent of the available RAM, but 20 percent of RAM is already occupied by `zswap`. The missing 10 percent of uncompressed RAM would then constantly be swapped out of and back into the memory area compressed by `zswap`, while the rest of the memory compressed by `zswap` would hold pages that were swapped out earlier and are currently unused. There is no mechanism for gradual writeback of those unused pages that would let the uncompressed memory grow.
Freeing zswap Memory
`zswap` will only free its pages in two situations:
- The processes using the pages free them or exit.
- The configured memory limit for `zswap` is exceeded. In this case, the oldest `zswap` pages (LRU) are written back to disk-based swap.
Memory Allocation Issues
In theory, `zswap` may not yet have exceeded its memory limit but already fail to allocate memory to store compressed pages. In that case, it will refuse to compress any new pages and they will be swapped to disk immediately. To confirm whether this issue is occurring, check the value of `/sys/kernel/debug/zswap/reject_alloc_fail`.
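A sketch for reading the counter; it needs root and a mounted debugfs, and falls back to a notice where the file is absent:

```shell
# non-zero output means zswap failed pool allocations and pages
# bypassed compression straight to disk-based swap
cat /sys/kernel/debug/zswap/reject_alloc_fail 2>/dev/null \
    || echo "zswap debugfs counters not available"
```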
Multi-core systems with fast solid-state storage were unable to take full advantage of the storage hardware's speed. This manifested especially as lock contention in the kernel block layer.
A new multi-queue block layer extension now helps to reach the maximum hardware speed with multiple hardware dispatch queue devices. This multi-queue block layer extension is offered as a technology preview.
If Xen is booted with the `vpmu=1` parameter, perf can be used within a PVHVM guest to identify the source of performance problems.
`virt-sandbox` provides a way for the administrator to run a command in a confined virtual machine using the QEMU/KVM or LXC libvirt virtualization drivers. The default sandbox domain only allows applications to read and write stdin, stdout, and any file descriptors handed to them; it is not allowed to open any other files. Enable SELinux on your system to make it usable. For more information, see http://sandbox.libvirt.org/quickstart/#System_service_sandboxes.
High swapping activity can occur on a Linux system, for example when a file system backup is triggered, even though the SAP applications are sized to fit completely into the system's main memory. This results in bad response times at the application level.
The `pagecache_limit` feature is only supported as part of SUSE Linux Enterprise Server for SAP Applications.
You can limit the amount of page cache that the kernel uses if there is competition between application memory and page cache. Once the page cache is filled to the configured limit, application memory is more important and should not be paged out.
Two new Linux kernel tunables have been introduced:
- `vm.pagecache_limit_mb` (`/proc/sys/vm/pagecache_limit_mb`)
- `vm.pagecache_limit_ignore_dirty` (`/proc/sys/vm/pagecache_limit_ignore_dirty`)
No pages will be paged out if the memory footprint of the workload plus the configured page cache limit does not exceed the amount of physical RAM in the system. If paging needs to occur, the Linux kernel will still favor keeping application memory over page cache unless the page cache is below the configured limit.
If there is plenty of free memory, the kernel will continue to use it as page cache in order to speed up file system operations.
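Assuming a system entitled to the feature (it is only supported in SUSE Linux Enterprise Server for SAP Applications), the tunables could be persisted in a sysctl configuration fragment; the 2048 MB limit below is an arbitrary example value, not a recommendation.

```
# sysctl configuration (excerpt) -- cap the page cache at 2 GiB (example)
# and ignore dirty pages when enforcing the limit
vm.pagecache_limit_mb = 2048
vm.pagecache_limit_ignore_dirty = 1
```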
Linux has managed to unify the operating system layer nicely across different architectures, but in the hypervisor space this challenge has persisted.
KVM solves the universal hypervisor challenge. It is now available across all targets that SLES supports. KVM allows the administrator to create virtual machines in the exact same fashion using the exact same set of tools on x86_64, s390x and ppc64le.
This makes SLES the perfect platform for virtualization and cloud scenarios in heterogeneous environments.
Kdump for System z is included as technical preview.
This is aimed at users of Linux and virtualization technologies on System z who have good Linux and KVM skills, but limited knowledge of System z and z/VM.
KVM is included on the s390x platform as a technology preview.
Running Linux with KVM in an LPAR allows x86-skilled administrators to explore the potential of Linux on the mainframe. KVM on Linux allows administrators to create and manage virtual machines themselves, assign resources, and benefit from the workload isolation and protection as well as the flexibility of KVM-based virtual machines, with the same tools and commands as known from an x86-based environment.
Over time, business requirements may increase the need and interest to explore the full potential of the underlying platform. This can be achieved by getting more and more insight into the unique hardware and performance characteristics of System z, as well as the option to operate other environments on the mainframe, also in collaboration with Linux.
1.4.2.13.3 System z Performance Counters in perf Tool
Enables the Linux perf tool to use hardware counters for improved performance measurements. It provides three sets of counters: the basic-, problem state-, and crypto-activity counter sets.
1.4.2.13.4 Disk Mirroring with Real-Time Enhancement for System z
This functionality is currently included as a technology preview in SLES 12.
1.4.2.13.5 Hot-patching Support for Linux on System z Binaries
Hot-patch support in GCC implements online patching of multi-threaded code for Linux on System z binaries. It is possible to select specific functions for hot-patching using a function attribute, or to enable hot-patching for all functions via the command-line option `-mhotpatch`. Because enabling hot-patching has a negative impact on software size and performance, it is recommended to use hot-patching for specific functions rather than enabling hot-patch support in general.
For online documentation, see http://gcc.gnu.org/onlinedocs/gcc/.
Provides improved monitoring and service via more timely and accurate display of settings and values in `ethtool` when running on hardware that supports the improved query of network cards.
1.4.2.13.7 Linux Support for Flash and Concurrent Flash MCL Updates
This feature supports System z integrated Flash and concurrent Flash hardware microcode level (MCL) upgrades without impacting I/O operations to the Flash storage media, and notifies users of the changed Flash hardware service level.
1.4.2.13.8 PCI Infrastructure Enablement for IBM System z
This feature provides prerequisites for the System z specific PCI support.
1.4.2.13.9 snIPL Interface to Control Dynamic CPU Capacity
Remote control of the capacity of target systems in highly available configurations allows maintaining bandwidth during failure situations and removes the need to keep unused capacity activated during normal operation.
Provides infrastructure to gather and display OSA and TCP/IP configuration information via the OSA Query Address Table hardware function, easing the administration of OSA and TCP/IP configurations.
The following packages require additional support contracts to be obtained by the customer in order to receive full support:
PostgreSQL Database
SUSE Linux Enterprise High Availability Extension (https://www.suse.com/products/highavailability/) — With the SUSE Linux Enterprise High Availability Extension 12, SUSE offers the most modern open source High Availability Stack for Mission Critical environments.
SUSE provides a Software Development Kit (SDK) for SUSE Linux Enterprise 12. This SDK contains libraries, development environments, and tools along the following patterns:
C/C++ Development
Certification
SUSE Linux Enterprise Server 12 has been submitted to the certification bodies for:
FIPS 140-2 validation, see: http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140InProcess.pdf
For more information about certification please refer to https://www.suse.com/security/certificates.html.
SUSE Linux Enterprise conforms to Unicode 3.0 or higher and is thus GB18030 compliant.
Unicode 3.0 has been supported by glibc since version 2.2, and SUSE Linux Enterprise currently uses a much newer version of glibc.
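To see which glibc version a given system actually runs, either query below can be used (a sketch; `getconf GNU_LIBC_VERSION` assumes a glibc-based system, with `ldd --version` as a fallback):

```shell
# print the C library version; any glibc >= 2.2 implies Unicode 3.0 support
getconf GNU_LIBC_VERSION 2>/dev/null || ldd --version | head -n 1
```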
SUSE Linux Enterprise Server can be deployed in three ways:
Physical Machine,
Virtual Host,
Virtual Machine in paravirtualized environments.
2.1.1 Avoid Adding Packages When Activating a Module Repository
When adding a module repository such as Public Cloud, the graphical installer (YaST Qt UI) automatically selects recommended packages. Often, this is not what the user expects.
To work around this behavior, disable the installation of recommended packages in the installer (YaST Qt UI), or use the text-mode installer (YaST ncurses UI), which by default does not automatically install recommended packages ('Install Recommended Packages for Already Installed Packages' is deactivated).
2.1.2 Installing with LVM2, Without a Separate /boot Partition
SUSE Linux Enterprise 12 generally supports installation on a linear LVM2 setup without a separate `/boot` partition, for example to use Btrfs as the root file system and achieve full system snapshot and rollback.
However, this setup is only supported under the following conditions:
Only linear LVM2 setups are supported.
There must be enough space in the partitioning 'label' (the partition table) for the grub2 bootloader first stage files. If the installation of the grub2 bootloader fails, you will have to create a new partition table. CAVEAT: Creating a new partition table destroys all data on the given disk!
For a migration from an existing SUSE Linux Enterprise 11 system with LVM2 to SUSE Linux Enterprise 12, the `/boot` partition must be preserved.
2.1.3 CJK Languages Support in Text-mode Installation
CJK (Chinese, Japanese, and Korean) languages do not work properly during text-mode installation if the framebuffer is not used (Text Mode selected in boot loader).
There are three alternatives to resolve this issue:
Use English or some other non-CJK language for installation, then switch to the CJK language later on the running system using YaST > System > Language.
Use your CJK language during installation, but do not choose Text Mode in the boot loader: press F3 Video Mode and select one of the other VGA modes instead. Select the CJK language of your choice using F2 Language, add textmode=1 to the boot loader command line, and start the installation.
Use graphical installation (or install remotely via SSH or VNC).
SLE 12 supports booting systems that follow the UEFI specification up to version 2.3.1 errata C.
Note: Installing SLE 12 on Apple hardware is not supported.
SLES 12 and SLED 12 implement UEFI Secure Boot, and the installation media support it. Secure Boot is only supported on new installations, and only if the Secure Boot flag is enabled in the UEFI firmware at installation time.
For more information, see Administration Guide, section Secure Boot.
2.1.6 Current Features and Limitations in a UEFI Secure Boot Context
Support for Secure Boot on EFI machines is enabled by default.
When booting with Secure Boot mode enabled in the firmware, the following features apply:
Installation to UEFI default boot-loader location with a mechanism to restore boot entries.
Reboot via UEFI.
Xen hypervisor can be booted without MSFT signature.
UEFI get-videomode support: the kernel is able to retrieve the video mode from UEFI and configure KMS mode with the same parameters.
UEFI booting from USB devices is supported.
Simultaneously, the following limitations apply:
bootloader, kernel and kernel modules must be signed.
kexec and kdump are disabled.
Hibernation (suspend on disk) is disabled.
Access to `/dev/kmem` and `/dev/mem` is not possible, not even as the root user.
Access to I/O ports is not possible, not even as the root user. All X11 graphical drivers must use a kernel driver.
PCI BAR access through sysfs is not possible.
`custom_method` in ACPI is not available.
debugfs for the `asus-wmi` module is not available.
The `acpi_rsdp` parameter does not have any effect on the kernel.
When booting with Secure Boot mode disabled in the firmware, the following features apply:
None of the limitations listed above are active.
The machine always stays bootable, regardless of whether Secure Boot is later toggled in the firmware.
The feature to retain EFI boot-manager entries after firmware updates or NVRAM resets is available even on systems without (or with disabled) Secure Boot support.
Secure boot on EFI machines can be disabled during installation by deactivating the respective option on the installation settings screen under 'Bootloader'.
If an update fails or causes trouble, it is sometimes helpful to be able to go back to the last working state.
Requirements to Create Atomic Snapshots
- The root file system needs to be Btrfs.
- The root file system needs to be on one device, including `/usr`.
This is needed because snapshots must be atomic, which is not possible if the data is stored on different partitions, devices, or subvolumes.
How to Do the Rollback
During boot, you can select an old snapshot. This snapshot will then be booted in something like a read-only mode. All the snapshot data is read-only, all other filesystems or btrfs subvolumes are in read-write mode and can be modified. To make this snapshot the default for the next reboot and switch it into a read-write mode, use 'snapper rollback'.
What Will Not Be Rolled Back
The following directories are excluded from rollback. This means that changes below this subdirectory will not be reverted when an old snapshot is booted, in order to not lose valuable data. On the other hand, this may prevent some third-party services from starting correctly when booting from an old snapshot.
Known Issues or Limitations
In general, rollback can result in inconsistencies between the data on the root partition (which has been rolled back to an earlier state) and data on other subvolumes or partitions. These inconsistencies may include the use of different file paths, formats and permissions.
Add-ons and third party software installed in separate subvolumes or partitions, such as /opt, can be completely broken after a rollback of a Service Pack.
Newly created users will vanish from `/etc/passwd` during a rollback, but their data is still in `/home`, `/var/spool`, `/var/log`, and similar directories. If a new user is created later, it may be given the same user ID, making it the owner of these files. This can be a security and privacy problem.
If a package update changes permissions or ownership of files or directories inside a subvolume (like `/var/log`, `/srv`, ...), the service may be broken after a rollback, because it is no longer able to write, access, or read the files.
General: if there are subvolumes like `/srv` containing a mix of code and data, rollback may lead to loss of data or broken, non-functional code.
General: if an update to a service introduces a new data format, rolling back to an old snapshot may render the service non-functional, if the older version is unable to handle the new data format.
Rollback of the boot loader is not possible, since all 'stages' of the boot loader must match. However, as there is only one MBR (Master Boot Record) per disk, there cannot be different snapshots of the other stages.
The ISO installation images can be directly dumped to a USB device such as a flash disk. This way you can install the system without the need of a DVD drive.
Several tools for dumping are listed at http://en.opensuse.org/SDB:Live_USB_stick.
YaST has been extended to support installation using an IPv6 iSCSI target as the root device.
When booting the installer from the DVD product media on a secure boot enabled system, the installation process is validated by the secure boot signature.
For more information about UEFI and secure boot, see the Administration Guide.
This section includes upgrade-related information for this release.
2.2.1 Updating Registration Status After Rollback
When performing a service pack migration, it is necessary to change the configuration on the registration server to provide access to the new repositories. If the migration process is interrupted or reverted (via restoring from a backup or snapshot), the information on the registration server is inconsistent with the status of the system. This may lead to you being prevented from accessing update repositories or to wrong repositories being used on the client.
When a rollback is done via Snapper, the system will notify the registration server to ensure access to the correct repositories is set up during the boot process. If the system was restored in any other way, or if communication with the registration server failed for any reason (for example, because the server was not accessible due to network issues), trigger the rollback on the client manually by calling `snapper rollback`.
We suggest always checking that the correct repositories are set up on the system, especially after refreshing the services using `zypper ref -s`.
2.2.2 POWER: Migration from SLES 11 to SLES 12 Requires Reinstallation#
On POWER, switching from SLES 11 to SLES 12 involves a transition in architecture from big endian (ppc64) to little endian (ppc64le). Because of this transition, a regular system update is not possible.
It is required to install the system as described in the Deployment Guide, section Installation on IBM POWER.
2.2.3 Migrating from SUSE Linux Enterprise Server 11 to 12 on VMware#
We suggest the following steps to perform migrations from SUSE Linux Enterprise Server 11 to 12 on VMware:
Run the VMware uninstall script /usr/bin/vmware-uninstall-tools.pl.
Perform the actual migration.
Manually install the package open-vm-tools.
For information about VMware's support for migrations between major OS versions, see http://kb.vmware.com/kb/2018695 (http://kb.vmware.com/kb/2018695).
2.2.4 Serial Console Bootloader Settings During Update#
While updating from SLES 11 SP3 to 12, previous serial console bootloader settings are ignored and a new configuration is proposed.
To enable a serial console, click Booting on the summary screen. On the kernel options tab, disable 'Use graphical console', enable 'Use serial console', and enter 'serial --unit=0 --speed=115200' in the text field named 'Console arguments'.
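The same effect can also be achieved directly in the GRUB 2 configuration; a minimal sketch, assuming the standard /etc/default/grub keys (not a verified SLES default file):

```shell
# /etc/default/grub -- sketch; keys are standard GRUB 2 options
GRUB_TERMINAL="serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
```

After editing the file, regenerate the configuration with grub2-mkconfig -o /boot/grub2/grub.cfg.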
2.2.5 /tmp Cleanup from sysconfig Automatically Migrated into systemd Configuration#
By default, systemd cleans the tmp directories daily, and systemd does not honor sysconfig settings in /etc/sysconfig/cron such as TMP_DIRS_TO_CLEAR. These sysconfig settings therefore need to be transformed to avoid potential data loss or unwanted behavior.
When updating to SLE 12, the variables in /etc/sysconfig/cron will be automatically migrated into an appropriate systemd configuration (see /etc/tmpfiles.d/tmp.conf). The following variables are affected:
Migration is supported from SUSE Linux Enterprise 11 SP3 (or higher) using the following methods:
Booting from an installation medium (ISO image)
Automated migration from SLE 11 SP3 to 12
For more information, see the Deployment Guide coming with SUSE Linux Enterprise.
For more information, see Section 3, “Infrastructure, Package and Architecture Specific Information”.
3 Infrastructure, Package and Architecture Specific Information#
Ext4 has some features that are under development and still experimental. Using these features thus poses a significant risk to data. To clearly indicate such features, the Ext4 driver in SUSE Linux Enterprise 12 refuses to mount (or mount read-write) file systems with such features. To mount such file systems, set the allow_unsupported module parameter (either when loading the module or via /sys/module/ext4/parameters/allow_unsupported). However, setting this option will render your kernel, and thus your system, unsupported.
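A minimal sketch of setting the parameter either way (the parameter name and sysfs path are taken from the text above):

```shell
# at runtime, for an already loaded ext4 module:
echo 1 > /sys/module/ext4/parameters/allow_unsupported

# or at module load time:
modprobe ext4 allow_unsupported=1

# note: either variant renders the kernel, and thus the system, unsupported
```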
Features which are treated this way are: bigalloc, metadata checksumming, and journal checksumming.
Kernel 3.12 no longer provides the /proc/acpi/event virtual file.
This file has only been used by the acpid daemon in SLE 11. SLE 12 does not ship this package anymore.
[All architectures] CONFIG_COMPAT_BRK has been disabled to allow randomization of the start address of the userspace heap. This can break old binaries based on libc5. To revert to the old behavior, set the kernel.randomize_va_space sysctl to 2.
[x86_64 only] CONFIG_COMPAT_VDSO has been disabled to enforce randomization of the VDSO address of 32-bit binaries on x86_64. This can break 32-bit binaries using glibc older than 2.3.3. To revert to the old behavior, specify vdso=2 on the kernel command line.
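Applied together, the two reverts described above can be sketched as follows (values as stated in the text; persist the sysctl in /etc/sysctl.conf if needed):

```shell
# revert the heap randomization behavior (all architectures):
sysctl -w kernel.randomize_va_space=2

# revert VDSO randomization for 32-bit binaries (x86_64 only):
# add "vdso=2" to the kernel command line in the boot loader configuration
```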
3.1.1.4 Format of the 'microcode' Field in /proc/cpuinfo Changed#
Due to a missing backport, the SLE 11 SP3 kernel is displaying the microcode revision in /proc/cpuinfo as a decimal number.
The SLE 12 kernel changed the format to a hexadecimal number. Now it is compatible with the mainline kernel.
3.1.1.5 Preparation for Non-linear Memory Mapping Deprecation#
Non-linear mappings are considered for deprecation in upstream as part of code cleanup. Of course, the existing syscall API (remap_file_pages) will stay and will be implemented as an emulation on top of regular mmap interface. To ensure a stable kernel application binary interface (kABI) during SLE 12 lifetime, SUSE is preparing this change. As a result, the first use of the syscall will trigger a warning and the module source code will not compile without modification. If your software encounters this condition, get in touch with your SUSE contact to get support during migration.
The kernel-default package now contains the kernel image and all supported modules. The kernel-default-base package is thus not necessary in normal setups. Also, all the debugging symbols are packaged in the kernel-default-debuginfo package.
Do not attempt to install the kernel-default-base
package unless building a minimal system. When using utilities like crash
or systemtap
, you only need to install the kernel-default-debuginfo
package. The kernel-default-devel-debuginfo
package is no longer needed and does not exist.
zone_reclaim_mode was enabled automatically if the distance between any two NUMA nodes was higher than RECLAIM_DISTANCE (which is 30 for x86_64). This auto-tuning has led to many issues in the past, and we expect it to cause even more as NUMA machines become more widespread.
Auto-tuning is no longer active. In sysctl.conf you can enable zone_reclaim_mode for those workloads that need NUMA locality.
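For example, a sysctl.conf entry enabling zone reclaim could look as follows (the value 1 enables plain zone reclaim; this is a sketch, check the kernel sysctl documentation for the exact mode bits):

```shell
# /etc/sysctl.conf -- enable NUMA zone reclaim for locality-sensitive workloads
vm.zone_reclaim_mode = 1
```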
By default, the initrd file is now compressed with:
Previously, it was compressed with gzip.
If the iTCO_wdt driver is enabled, the sensor driver shows the service processor reporting a constant temperature, even under heavy CPU load or with a stopped CPU fan.
To disable the Intel watchdog functionality, we blacklist the iTCO_wdt driver for SLES, SLED, and SLEPOS installations.
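The blacklist is implemented as a modprobe configuration entry; a sketch of the equivalent manual setting (the configuration file name is hypothetical):

```shell
# /etc/modprobe.d/50-iTCO_wdt.conf (hypothetical file name)
# prevent the Intel watchdog driver from loading
blacklist iTCO_wdt
```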
The 'sync_supers' kernel thread periodically woke up and synchronized all dirty superblocks for all mounted file systems. This made the system's sleep time shorter and forced the CPU to leave the low power state every 5 seconds.
This kernel thread is gone and now each file system manages its own superblock in a smart way without waking up the system unnecessarily.
3.1.1.11 Scaling of Dumps to Support 16S/24TB System#
Both kexec-tools and the kernel are updated to support crashkernel sizes larger than 896MB and crashkernels that load above 4GB.
Linux Kernel version 3.3 started supporting SD/SDIO version 3.0 that provides faster read/write speed and enhanced security.
A SDIO (Secure Digital Input Output) card is an extension of the SD specification to cover I/O functions.
Host devices that support SDIO can use the SD slot to support Wi-Fi, Bluetooth, Ethernet, IrDA, etc.
SDIO 3.0 cards and hosts add support for UHS-I bus speed mode, which can be as fast as 104MB/s.
An important requirement for every enterprise operating system is the level of support a customer receives for their environment. Kernel modules are the most relevant connector between hardware ('controllers') and the operating system.
For more information about the handling of kernel modules, see the SUSE Linux Enterprise Administration Guide.
The netfilter TEE kernel module is now part of the standard kernel.
For legacy reasons, /etc/ssl/certs may only contain CA certificates in PEM format. Because this format does not carry usage information, /etc/ssl/certs may only contain CA certificates that are intended for server authentication.
OpenSSL understands a different format that carries the usage information. Therefore, OpenSSL internally uses a different location that contains certificates of all usage types (/var/lib/ca-certificates/openssl). If you put a certificate in plain PEM format into /etc/pki/trust/anchors/ and call update-ca-certificates, it should end up both in /var/lib/ca-certificates/pem (that is, /etc/ssl/certs) and in /var/lib/ca-certificates/openssl (as well as in other locations like the certificate bundle or the Java keyring).
3.1.3.2 X.Org: fbdev Used in UEFI Secure Boot Mode (ASpeed Chipset)#
The unaccelerated fbdev driver is used as a fallback in UEFI secure boot mode with the AST KMS driver, EFI VGA, and other currently unknown framebuffer drivers.
Our kernel is compiled with support for Linux Filesystem Capabilities. Since SLE 12, it is enabled by default.
Disable it by adding file_caps=0 as a kernel boot option.
3.1.3.4 Basic Linux-Integrity Enablement (IMA, IMA-Appraisal, EVM)#
IMA, IMA-appraisal, and EVM are configured in SLES-12, but not enabled by default as additional configuration is required (for example enabling TPM, labeling the filesystem).
IMA can be used to attest a system's runtime integrity. IMA measurements are enabled with the boot parameter 'ima_tcb'. This starts a builtin policy which measures all regular files that are executed or read by a process with root uid. The builtin policy can be replaced with a system customized policy, for more information, refer to https://www.kernel.org/doc/Documentation/ABI/testing/ima_policy (https://www.kernel.org/doc/Documentation/ABI/testing/ima_policy).
In order to enforce local file integrity, the file system is labeled with good measurements (for example, hash, signature). IMA-appraisal verifies that the current measurement of a file matches the good value. If the values do not match, access to the file is denied. For more information on creating public/private keys used for signing files, loading the public key on the IMA keyring, and labeling the file system, refer to http://sourceforge.net/p/linux-ima/wiki/Home/#ima-appraisal (http://sourceforge.net/p/linux-ima/wiki/Home/#ima-appraisal) and http://sourceforge.net/p/linux-ima/wiki/Home/#dracut (http://sourceforge.net/p/linux-ima/wiki/Home/#dracut).
EVM protects integrity sensitive inode metadata against offline attack. For more information on creating trusted/encrypted keys and loading the EVM keyring, refer to http://sourceforge.net/p/linux-ima/wiki/Home/#enabling-evm (http://sourceforge.net/p/linux-ima/wiki/Home/#enabling-evm) and http://sourceforge.net/p/linux-ima/wiki/Home/#dracut (http://sourceforge.net/p/linux-ima/wiki/Home/#dracut).
To operate OpenSSH in FIPS mode, the openssh-fips
RPM package must be additionally installed on the system. This package provides checksums for integrity checking of the openssh
package.
Also, 1024-bit DSA keys are not allowed and should be disabled as they will not work.
For more information, see http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp2471.pdf (http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp2471.pdf).
Trusted and Encrypted Keys are now built-in to support EVM. More information can be found here https://www.kernel.org/doc/Documentation/security/keys-trusted-encrypted.txt (https://www.kernel.org/doc/Documentation/security/keys-trusted-encrypted.txt).
3.1.3.7 Change of Default Locations for Root Certificates#
Using /etc/ssl/certs or even a single bundle file to store SSL root certificates makes it impossible to separate package-provided and administrator-provided files. Package updates would therefore either not actually update the certificate store or overwrite administrator changes.
A new location is now used to store trusted certificates:
/usr/share/pki/trust/anchors/ and /etc/pki/trust/anchors/ for the root CA certificates
/usr/share/pki/trust/blacklist/ and /etc/pki/trust/blacklist/ for blacklisted certificates
A helper tool called 'update-ca-certificates' is used to propagate the content of those directories to the certificate stores used by openssl, gnutls, and openjdk.
/etc/ssl/certs links to an implementation-specific location managed by p11-kit. It must not be used by the administrator anymore.
Administrators must put local CA certificates into /etc/pki/trust/anchors/ instead and run the update-ca-certificates tool to propagate the certificates to the various certificate stores.
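A minimal sketch of installing a local CA certificate as described above (the certificate file name is hypothetical):

```shell
# copy the local CA certificate into the anchors directory:
cp example-ca.pem /etc/pki/trust/anchors/

# propagate it to the certificate stores (openssl, gnutls, openjdk):
update-ca-certificates
```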
With SLES 11 SP1, OpenSSL compressed data before encryption, which reduced throughput and increased CPU load on platforms with cryptographic hardware. Starting with SLES 11 SP2, the behavior is adjustable via the environment variable OPENSSL_NO_DEFAULT_ZLIB, depending on customer requirements.
By default, compression in OpenSSL is now turned off.
Set OPENSSL_NO_DEFAULT_ZLIB per application or in a global configuration file.
dmesg was providing all kinds of system-internal information to all users, including kernel addresses, crashes of services, and similar data that could be used by local attackers.
The use of dmesg is now restricted to the root user.
3.1.3.10 Increased Key Lengths for the SSH Service#
Cryptographic advances and evaluations strongly suggest no longer using keys smaller than 2048 bits. This is codified in various standards, for example NIST SP 800-131A or BSI TR-02102.
SSH was updated to generate RSA keys with at least 2048 bits key length and Elliptic Curve DSA keys of at least 256 bit key length.
The DSA key size should also be increased, but due to portability issues, 1024-bit keys are still allowed. We recommend not using or generating DSA keys; if you must, try to use 2048-bit or larger keys, but watch for interoperability issues.
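Keys satisfying these recommendations can be generated with standard ssh-keygen invocations; a sketch:

```shell
# RSA key with at least 2048 bits (here: 2048):
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa

# Elliptic Curve DSA key with 256 bits:
ssh-keygen -t ecdsa -b 256 -f ~/.ssh/id_ecdsa
```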
3.1.3.11 cURL Now Provides SFTP and SCP Protocols#
Customers were missing support of the encrypted 'SFTP' and 'SCP' (SSH based) file transfer protocols in the cURL library.
The SFTP and SCP protocols have been enabled in the cURL library.
Since SLES 11 SP3, the GSSAPIKeyExchange mechanism (RFC 4462) is supported. This directive specifies how host keys are exchanged. For more information, see the SLES Security Guide, Network Authentication with Kerberos.
Use udisks2 to restrict access to removable media. For more information, see the Security and Hardening Guide.
The seccheck package comes with a shell script that allows configuring autologout functionality. For more information, install the seccheck package and see the help output:
Note: The autologout cron job is disabled by default. To enable the functionality, uncomment the cron job line.
3.1.4.1 Intel 10GbE PCI Express Adapters: Setting MAC Address of a VF Interface#
When using the ixgbe and ixgbevf drivers (Intel 10GbE PCI Express adapters) in the context of SRIOV, it is possible to set the MAC address of a VF interface via two methods:
through the PF: This would typically be done on the virtualization host using a command such as
ip link set p4p1 vf 0 mac d6:2f:a7:28:78:c2
through the VF: This would typically be done on the virtualization guest using a command such as
ip link set eth0 address d6:2f:a7:28:78:c2
Initially, either method is permitted. However, after the administrator has explicitly configured a MAC address for a VF through its PF, the ixgbe driver disallows further changes of the MAC address through the VF. For example, if an attempt is made to change the MAC address through the VF on a guest after the MAC address for this device has been set on the host, the host will log a warning of the following form:
To avoid this problem, either avoid configuring an address for the VF through the PF (on the virtualization host) and let a trusted guest set whatever MAC address is desired, or set the desired MAC address through the PF such that further changes through the VF are not needed.
The UUID generation daemon (uuidd) generates universally unique identifiers (UUIDs). As released with SLES 12 GA, the systemd preset had 'use socket activation for uuidd' off by default.
The post-GA update comes with a changed systemd preset. This update fixes the use and behavior of uuidd.
If you install the updated package on a system where the SLES 12 GA version is not installed, the new preset is in place. This means 'use socket activation for uuidd' is applied during the installation and the service works out of the box.
If you update the package on a system where the SLES 12 GA version is installed, the new preset is not enforced. This means the old setting will stay in place. In this case, it is recommended to switch to the proposed new default behavior of starting uuidd on first use. Recommended commands:
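The recommended commands are not spelled out above; a plausible sketch using standard systemd socket activation (assumption: the socket unit is named uuidd.socket):

```shell
# enable and start socket activation so uuidd starts on first use:
systemctl enable uuidd.socket
systemctl start uuidd.socket
```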
nfs-utils 1.2.9 changed the default so that NFSv2 is not served unless explicitly requested.
If your clients still depend on NFSv2, enable it on the server by setting
NFSD_OPTIONS='-V2'
MOUNTD_OPTIONS='-V2'
in /etc/sysconfig/nfs. After restarting the service, check whether version 2 is available with the command:
cat /proc/fs/nfsd/versions
The output should read: +2 +3 +4 +4.1 -4.2
Depending on your XDMCP client, the following configurations are supported:
GNOME 3 and gdm require a number of recent X Extensions as specified and implemented by X.Org in Xserver 1.12 or later. Among them are XFixes version 5 or later and XInput (Xi) version 2.2 or later. Also extensions to GLX such as GLX_EXT_texture_from_pixmap are required. An X server used to remotely connect over XDMCP must support these extensions.
If these extensions are available from your X server (such as Xorg or Xephyr), the default settings for the display manager (gdm) and for the window manager (GNOME3/sle-classic) should be used.
If some extensions are missing from your X server (such as Xnest) which is used to connect to the XDMCP display manager, 'xdm' should be used as the display manager (set DISPLAYMANAGER='xdm' in /etc/sysconfig/displaymanager), while 'icewm' should be set as the window manager (DEFAULT_WM='icewm' in /etc/sysconfig/windowmanager).
Note: The network traffic used with XDMCP is not encrypted.
As an alternative to XDMCP, VNC can be used to connect remotely to a graphical interface. This does not impose any specific requirements on X extensions.
For a nested Xserver, Xephyr is the preferred choice over Xnest.
Within the wicked family of tools, the nanny daemon is a policy engine that is responsible for asynchronous or unsolicited scenarios such as hotplugging devices.
The nanny framework is not enabled by default in SUSE Linux Enterprise 12. To enable it, either specify 'nanny=1' as a boot parameter to the installer (linuxrc), or activate it after the installation in /etc/wicked/common.xml:
After a change at runtime, restart the network:
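A sketch of the post-installation activation; the exact XML element name in /etc/wicked/common.xml is an assumption (<use-nanny>), so verify it against the shipped file:

```shell
# /etc/wicked/common.xml -- assumed element name:
#   <config>
#     ...
#     <use-nanny>true</use-nanny>
#   </config>

# after the change, restart the network:
systemctl restart wicked.service
```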
For more information, see the SUSE Linux Enterprise Admin Guide, Section The wicked Network Configuration.
The cachefilesd daemon has been included with a SLE 11 SP2 maintenance update.
The cachefilesd user-space daemon manages persistent disk-based caching of files that are used by network file systems such as NFS. cachefilesd can help with reducing the load on the network and on the server because some of the network file access requests get served by the local cache.
3.1.4.7 PCI Multifunction Device Support (LAN, iSCSI and FCoE)#
The YaST FCoE client (yast2 fcoe-client) is enhanced to show the private flags in additional columns to allow the user to select the device meant for FCoE. The YaST network module (yast2 lan) excludes storage-only devices from network configuration. The underlying tool hwinfo reads the private flags from the device and provides the information for YaST, which allows the user to select the correct FCoE device.
With NETCONFIG_DNS_RESOLVER_OPTIONS in /etc/sysconfig/network/config you can specify arbitrary options that netconfig will write to /etc/resolv.conf.
For more information about available options, see the resolv.conf man page.
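For example (the option values are illustrative; see resolv.conf(5) for the full list):

```shell
# /etc/sysconfig/network/config
NETCONFIG_DNS_RESOLVER_OPTIONS="timeout:1 attempts:2 rotate"
```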
3.1.4.10 IP-over-InfiniBand (IPoIB) Mode Configuration#
When creating or editing a configuration for an IPoIB device via yast2-network (YaST Control Center > Network Settings), it is possible to select its mode. The device's ifcfg file is updated accordingly.
While fixing issues in the operating system, you might need to install a Problem Temporary Fix (PTF) into a production system. Those packages provided by SUSE are signed with a special PTF key. In contrast to SUSE Linux Enterprise 11, this key is not imported by default on SLE 12 systems.
To manually import the key, use the following command:
rpm --import /usr/share/doc/packages/suse-build-key/suse_ptf_key.asc
After importing the key, you can install PTF packages on SLE 12.
libzypp-14.39.0 will by default check a downloaded RPM package's signature if the corresponding repository's metadata is not GPG-signed or the signature was not verified.
Customers using unsigned repositories may experience that zypper/YaST now asks whether to accept a package whose signature cannot be checked because the signing key is not known [4-Signatures public key is not available]:
Ignoring the error will install the package despite the failed signature verification. It is not recommended to choose this option unless it is known that the GPG key (with key ID <as displayed>) which was used to sign the package is trusted but was not imported into the RPM database.
The message can be avoided by manually importing the missing trusted key into the RPM database (using rpmkeys --import PUBKEY).
Signature verification errors other than [4-Signatures public key is not available] should not be ignored.
Customers using only signed repositories should experience no difference.
The default of checking either the repository metadata signature or the RPM package signatures can be tuned globally (in /etc/zypp.conf) or per repository (by editing the corresponding .repo file in /etc/zypp/repos.d). Explicitly setting repo_gpgcheck or pkg_gpgcheck will override the defaults.
3.2.3 Connection to VNC Integrated in GNOME Environment (vino)#
vino (the VNC server integrated into the GNOME desktop environment) uses an encrypted connection (TLS) by default, which might not be supported by all VNC clients on all platforms.
You can disable encryption on vino by running the following command as a regular user
or by using the dconf-editor graphical tool, available from GNOME Control Center.
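The command itself is not shown above; a plausible sketch using GSettings (assumption: the schema and key are org.gnome.Vino require-encryption):

```shell
# run as the regular user owning the GNOME session:
gsettings set org.gnome.Vino require-encryption false
```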
Known VNC clients with support for TLS encryption are 'vinagre' (GNOME VNC client) and virt-viewer (libvirt VM client, available for Windows from http://virt-manager.org/download/ (http://virt-manager.org/download/)).
SUSE Linux Enterprise 12 supports the new on-disk format (v5) of the XFS file system. XFS file systems created by YaST will use this new format. The main advantages of this format are automatic checksumming of all XFS metadata, file type support, and support for a larger number of access control lists for a file.
Caveat: Pre-SLE 12 kernels, xfsprogs before version 3.2.0, and versions of the grub2 boot loader older than the one released in SLE 12 do not understand the new file system format and thus refuse to work with it. This can be problematic if the file system should also be used from older or other distributions.
If you require interoperability of the XFS file system with older or other distributions, format the file system manually using the mkfs.xfs command. That will create a file system in the old format unless you use the '-m crc=1' option.
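A sketch of both variants (the device name is hypothetical):

```shell
# old on-disk format, interoperable with pre-SLE 12 systems:
mkfs.xfs /dev/sdX1

# new v5 format with metadata checksums:
mkfs.xfs -m crc=1 /dev/sdX1
```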
3.2.5 Support for Web Services for Management (WS Management)#
The WS-Management protocol is supported via Openwsman, providing client (wsmancli package) and server (openwsman-server package) implementations.
Openwsman, by using the WS-Management standard, can interface to:
winrm (Microsoft Windows)
vPro/iAMT (Intel)
iDRAC (Dell)
VMware ESX/ESXi, vCenter, vSphere (VMware API)
Cisco UCS (Cisco)
It is possible to run SUSE Linux Enterprise 12 on a shared read-only root file system. A read-only root setup consists of the read-only root file system, a scratch file system, and a state file system. The /etc/rwtab file defines which files and directories on the read-only root file system are replaced with which files on the state and scratch file systems for each system instance.
The readonlyroot kernel command line option enables read-only root mode; the state= and scratch= kernel command line options determine the devices on which the state and scratch file systems are located.
In order to set up a system with a read-only root file system, set up a scratch file system, set up a file system to use for storing persistent per-instance state, adjust /etc/rwtab as needed, add the appropriate kernel command line options to your boot loader configuration, replace /etc/mtab with a symlink to /proc/mounts as described below, and (re)boot the system.
Replace /etc/mtab with the appropriate symbolic link:
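A sketch of the symlink replacement described above (the symlink target /proc/mounts is taken from the text):

```shell
# replace the regular file /etc/mtab with a symlink to /proc/mounts:
ln -sf /proc/mounts /etc/mtab
```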
See the rwtab(5) manual page for more information and http://www.redbooks.ibm.com/abstracts/redp4322.html (http://www.redbooks.ibm.com/abstracts/redp4322.html) for limitations on System z.
SLE 12 has moved to systemd, a new way of managing services. For more information, see the SUSE Linux Enterprise Admin Guide, Section The Systemd Daemon.
Time synchronization with microsecond precision across a group of hosts in a data center is challenging to achieve without extra hardware.
Support for Precision Time Protocol version 2, leveraging the time synchronization features of modern network interface cards, has been included in SUSE Linux Enterprise Server 12. To take advantage of precise time synchronization, install the new linuxptp package and refer to the documentation in the /usr/share/doc/packages/linuxptp directory.
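A minimal sketch of starting a PTP client on one interface (the interface name is hypothetical; ptp4l is the main daemon shipped with linuxptp):

```shell
# synchronize the PTP hardware clock of eth0, printing messages to stdout:
ptp4l -i eth0 -m
```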
schedtool has been replaced by chrt, which is part of the standard util-linux package. chrt also handles all scheduler classes.
Note that chrt requires a priority to be provided for all normal scheduling classes as well as realtime classes. For example, to set your shell to SCHED_FIFO priority 1, enter:
To set it back to SCHED_OTHER:
'0' is the only valid (and required) priority for the SCHED_OTHER, SCHED_BATCH, and SCHED_IDLE classes; priorities 1-99 are realtime priorities.
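The commands themselves are not shown above; a sketch using chrt's standard options and the shell's own PID ($$):

```shell
# set the current shell to SCHED_FIFO priority 1:
chrt --fifo -p 1 $$

# set it back to SCHED_OTHER (priority 0 is required):
chrt --other -p 0 $$
```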
On SUSE Linux Enterprise 11, the bind mount in /etc/exports was mandatory. It is still supported, but now deprecated.
Configuring directories for export with NFSv4 is now the same as with NFSv3.
3.2.11 Intel AMT (Active Management Technology) Support#
Intel AMT (Active Management Technology) is hardware-based technology for remotely managing and securing PCs out-of-band.
Intel MEI (Management Engine Interface) is a driver in the Linux kernel that allows applications to access the Intel ME (Management Engine) firmware via the host interface. The MEI driver is used by the AMT Local Manageability Service (LMS).
systemd can restart services if they crash. Where appropriate, Restart=on-abort is set in the service files.
For configuring a macvlan interface, see the ifcfg-macvlan(5) man page.
To change the usage of delta RPMs during the update, it was previously necessary to edit /etc/zypp/zypp.conf and set download.use_deltarpm to 'false'.
In the YaST Online Update Configuration dialog you can now activate delta RPMs usage by checking Use delta rpms. This setting will change the configuration file in the background.
3.2.15 SuSEconfig.permissions Replaced by chkstat#
It is no longer possible to set file permissions with SuSEconfig --module permissions.
If you want to set the file permissions as defined in /etc/permissions.*, run:
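The replacement command is not spelled out above; a plausible sketch (assumption: chkstat's --system mode applies the /etc/permissions.* profiles):

```shell
chkstat --system
```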
3.3.1 Using the 'noop' I/O Scheduler for Multipathing and Virtualization Guests#
For advanced storage configurations like 4-way multipath to an array, you end up with an environment where both the host OS and the storage array are scheduling I/O. It is a common occurrence that those schedulers end up competing with each other and ultimately degrade performance. Because the storage array has the best view of what the storage is doing at any given time, enabling the noop scheduler on the host tells the OS to get out of the way and let the array handle all of the scheduling.
Following the same rationale, the noop I/O scheduler should also be used for block devices within virtualization guests.
To change the I/O scheduler for a specific block device, use:
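The command is not shown above; a sketch via sysfs (the device name is hypothetical; the change does not persist across reboots):

```shell
# select the noop scheduler for one block device:
echo noop > /sys/block/sdX/queue/scheduler

# verify; the active scheduler is shown in brackets:
cat /sys/block/sdX/queue/scheduler
```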
For more information, see the SUSE Linux Enterprise System Analysis and Tuning Guide.
Pixz (pronounced 'pixie') is a parallel, indexing version of XZ. It takes advantage of running LZMA compression of multiple parts of an input file on multiple cores simultaneously. The resulting file contains an index of the data blocks, which enables random access to the data.
3.3.3 Enabling VEBOX on Haswell in the drm/i915 Kernel Driver#
Linux Cloud Video Transcode is an Intel GEN based hardware solution to support high quality and performance video transcoding on a server. By enabling VEBOX on Haswell for video pre- and post-processing features like DN/ADI, SUSE Linux Enterprise offers improved transcode quality.
On systems with a high NFS load, connections may block.
To work around such performance regressions with NFSv4, you could open more than one TCP connection to the same physical host. This could be accomplished with the following mount options:
To request that the transport is not shared, use:
Where N is unique. If N is different for two mounts, they will not share the transport. If N is the same, they might (if they are for the same server, etc.).
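The mount options themselves are elided above; a sketch assuming the option is spelled nosharetransport=N (an assumption, verify against nfs(5) on SLES; server and paths are hypothetical):

```shell
# two mounts of the same server that do NOT share a TCP transport:
mount -t nfs -o nosharetransport=1 server:/export /mnt/a
mount -t nfs -o nosharetransport=2 server:/export /mnt/b
```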
3.3.5 Performance and Scaling Improvements to Support 16S/24TB Systems#
Currently, reading /proc/vmcore is done by read_oldmem, which uses ioremap/iounmap per single page. For example, if memory is 1GB, ioremap/iounmap is called 1GB / 4KB = 262144 times. This causes big performance degradation due to repeated page table changes, TLB flushes, and the build-up of VM-related objects.
To address the issue, SLES does the following:
Applying mmap on /proc/vmcore to improve read performance under a sufficiently large mapping size.
Reducing the number of TLB flushes by using a large mapping size.
Both mem_map for dump filtering and page frames are consecutive data.
No copying from kernel space to user space.
The current main user of this mmap call is makedumpfile, which not only reads memory from /proc/vmcore but also does processing like filtering, compression, and I/O work.
3.4.1 /dev/disk/by-path/ Links for virtio Disks No Longer Available#
Because virtio numbers are not stable, by-path links for virtio disks are no longer available. These names are not persistent.
Btrfs is a copy-on-write (CoW) general purpose file system. Based on the CoW functionality, Btrfs provides snapshotting. Beyond that, data and metadata checksums improve the reliability of the file system. Btrfs is highly scalable, and also supports online shrinking to adapt to real-life environments. On appropriate storage devices Btrfs also supports the TRIM command.
Support
With SUSE Linux Enterprise 12, Btrfs is the default file system for the operating system, xfs is the default for all other use cases. We also continue to support the Ext-family of file systems, Reiserfs and ocfs2. Each file system offers distinct advantages. Customers are advised to use the YaST partitioner (or AutoYaST) to build their systems: YaST will prepare the Btrfs file system for use with subvolumes and snapshots. Snapshots will be automatically enabled for the root file system using SUSE's snapper infrastructure. For more information about snapper, its integration into ZYpp and YaST, and the YaST snapper module, see the SUSE Linux Enterprise documentation.
Migration from 'Ext' and Reiserfs File Systems to Btrfs
Migration from existing 'Ext' file systems (Ext2, Ext3, Ext4) and Reiserfs is supported 'offline' and 'in place', if the original file system has been created with a 4k block size (this is the case for most file systems on the x86-64 and System z architectures). Calling 'btrfs-convert <device>' will convert the file system. This is an offline process which needs at least 15% free space on the device, but is applied in place. To roll back, call 'btrfs-convert -r <device>'. Caveat: when rolling back, all data that has been added after the conversion to Btrfs will be lost; in other words, the roll back is complete, not partial.
RAID
Btrfs is supported on top of MD (multiple devices) and DM (device mapper) configurations. Use the YaST partitioner to achieve a proper setup. Multivolume Btrfs is supported in RAID0, RAID1, and RAID10 profiles in SUSE Linux Enterprise 12; higher RAID levels are not yet supported, but might be enabled with a future service pack.
SWAP files
Using swap files on top of Btrfs is not supported. In general, for performance reasons, we advise using partitions for swapping rather than swap files on top of any file system.
Future Plans
Compression functionality for Btrfs is currently under development and will be supported once the development has matured.
We are committed to actively work on the Btrfs file system with the community, and we keep customers and partners informed about progress and experience in terms of scalability and performance. This may also apply to cloud and cloud storage infrastructures.
Filesystem Maintenance, Online Check, and Repair Functionality
Check and repair functionality ('scrub') is available as part of the Btrfs command line tools. 'Scrub' aims to verify data and metadata, assuming the tree structures are fine. 'Scrub' can (and should) be run periodically on a mounted file system: it runs as a background process during normal operation.
We recommend applying regular maintenance to the Btrfs file system to optimize performance and disk usage. Specifically, we recommend balancing and defragmenting the file system on a regular basis. Check the 'btrfs-maintenance' package and see the SUSE Linux Enterprise documentation for more information.
Capacity Planning
If you are planning to use Btrfs with its snapshot capability, it is advisable to reserve twice as much disk space as the standard storage proposal. This is automatically done by the YaST2 partitioner for the root file system.
Backward compatibility - Hard Link Limitation
Previous products had a low limit on the number of hard links to a file within a directory. This has been fixed; the limit is now 65535. It requires a file system created with '-O extref', which is done by default. Caveat: such a file system might not be mountable on older products.
Backward compatibility - Enhanced metadata
The file systems are by default created with a more space efficient format of metadata, the feature is called 'skinny-metadata' for mkfs. Caveat: Such a file system will not be mountable on previous products.
Backward compatibility - metadata block size is 16k
The default metadata block size has changed to 16 kilobytes, reducing metadata fragmentation. Caveat: Such a file system will not be mountable on older products.
Other Limitations
At the moment, Btrfs is not supported as a seed device.
For More Information
For more information about Btrfs, see the SUSE Linux Enterprise documentation.
The Btrfs file system has a group of features that we are classifying as 'supported' and another group classified as 'unsupported'. By default, these unsupported features will not be available unless a customer uses the Btrfs module parameter allow_unsupported=1.
Supported:
Copy on Write
Snapshots
Subvolumes
Metadata Integrity
Data Integrity
Online metadata scrubbing
Manual defragmentation
Manual deduplication
Quota groups
Not Supported (not yet mature):
Inode Cache
Auto defrag
RAID
Compression
Send / Receive
Hot add / remove
Seeding devices
Multiple devices
'Big' metadata
This list will change over time, as features mature.
With SUSE Linux Enterprise 12, the default file system in new installations was changed from Ext3 to Btrfs for the root system partition. XFS is the default file system for the /home partition and other data partitions.
In the expert partitioner, the default file system is Btrfs. The user can change it if another file system is more suitable to accomplish the intended workload.
POWER Architecture
On POWER, the page size is 64K. Due to the assumption made by Btrfs regarding data block size (i.e., data block size being equal to the page size), a Btrfs installation on POWER will use a block size of 64K. This means that a Btrfs file system created on x86 will not be mountable and readable via Btrfs on POWER, and vice versa.
If data sharing in mixed architecture environments is a major concern, make sure to use XFS on POWER for data partitions.
Identical data should not be stored more than once, to save storage space.
SUSE Linux Enterprise supports the data deduplication feature of the Btrfs file system. To achieve the deduplication it replaces identical contents (blocks) with logical links to a single copy of the block in a common storage location.
The deduplication is performed out-of-band (also called post-process or offline) using a specialized tool.
3.4.6 LVM: PV Command to Display the PEs in Use by LVs#
This is the command to display the PEs in use by LVs:
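The release note does not reproduce the exact command here; one standard LVM invocation that shows the physical extent segments of a PV together with the LV occupying them is pvdisplay with its --maps option (the device path below is a placeholder):

```shell
# Show physical extent (PE) segments on a PV and which LV uses them.
# Requires root and an existing LVM setup; /dev/sdb1 is a placeholder.
pvdisplay --maps /dev/sdb1
```

The output lists each PE range with the logical volume mapped onto it, or marks the range as free.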
GRUB2 offers support for PReP partitions on GUID Partition Table (GPT) disks.
Names for 'md' RAID devices, particularly as they appear in /proc/mdstat, traditionally have numeric names like 'md4'. Working with these names can be clumsy.
In SLE-12 the option is available to use textual names. Adding the line CREATE names=yes to /etc/mdadm.conf
will cause names like md_home to be used in place of e.g. md127 if a name was given when creating the array. This will likely be enabled by default in future releases of SLE.
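A minimal sketch of the workflow (array name and member devices are placeholders):

```shell
# Enable textual md names for newly assembled arrays.
echo 'CREATE names=yes' >> /etc/mdadm.conf

# Create an array with an explicit name; it will then be accessible
# as /dev/md/home and appear as md_home in /proc/mdstat.
mdadm --create /dev/md/home --name=home --level=1 --raid-devices=2 \
      /dev/sda2 /dev/sdb2
```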
3.4.9 Mounting NFS Volumes Locally on the Exporting Server#
With SUSE Linux Enterprise 12, it is now possible to mount NFS volumes locally on the exporting server.
3.4.10 Btrfs: Parameter to Enable Unsupported Features#
Btrfs has a number of features that for reasons of instability or immaturity SUSE chooses not to support in the enterprise releases. In order to avoid undesired failures, we can disable those features in the code.
The module parameter to enable unsupported features is called allow_unsupported.
To test out those unsupported features, you can enable them optionally with a module flag ( allow_unsupported=1
) that also taints the module as unsupported. Alternatively, the same can be achieved by writing 1 to the module parameter exported in /sys/module/btrfs/parameters
.
Mount options that will be denied:
inode_cache
autodefrag
Compression
Seeding Device
Runtime operations that will be denied:
Fallocate and Hole Punch
Receive
Send (NO_FILE_DATA mode is allowed)
Device Replace
An attempt to mount with a disallowed option or to use a disallowed ioctl fails with an 'operation not supported' error code and prints a message to the syslog stating that the feature is unsupported and that allow_unsupported=1 would allow it.
3.4.11 Dynamic Aggregation of LVM Metadata via lvmetad#
Most LVM commands require an accurate view of the LVM metadata stored on the disk devices in the system. With the current LVM design, if this information is not available, LVM must scan all the physical disk devices in the system. This requires a significant amount of I/O operations in systems with a large number of disks.
The purpose of the lvmetad daemon is to eliminate the need for this scanning by dynamically aggregating metadata information each time the status of a device changes. These events are signaled to lvmetad by udev rules. If lvmetad is not running, LVM performs a normal scan.
This feature is disabled by default in SLES 12. To enable it, refer to the use_lvmetad
parameter in the /etc/lvm/lvm.conf
file.
The reiserfs file system was fully supported for the lifetime of SLES 11, specifically for migration purposes. With SLES 12, reiserfs is still fully supported for use, but support for creating new reiserfs file systems has been removed from YaST.
SUSE Linux Enterprise Virtual Machine Driver Pack is a set of paravirtualized device drivers for Microsoft Windows operating systems. These drivers improve the performance of unmodified Windows guest operating systems that are run in virtual environments created using Xen or KVM hypervisors with SUSE Linux Enterprise Server 10 SP4, SUSE Linux Enterprise Server 11 SP3 and SUSE Linux Enterprise Server 12. Paravirtualized device drivers are installed in virtual machine instances of operating systems and represent hardware and functionality similar to the underlying physical hardware used by the system virtualization software layer.
SUSE Linux Enterprise Virtual Machine Driver Pack 2.2 new features include:
Support for SUSE Linux Enterprise Server 12
Support for new Microsoft Windows operating systems: Windows Server 2012 R2 and Windows 8.1
Support for virtual to virtual migration (moving guest from Xen to KVM)
Windows Guest Agent for better host to guest communication
For more information on VMDP 2.2, refer to the official documentation.
With this new Intel processor, some new instructions are available:
Floating-Point Fused Multiply-Add (FMA) reduces the latency of combined floating-point multiply and add sequences. 256-bit integer vectors can be useful in scientific computations and other numerically intensive applications. The big-endian move instruction (MOVBE) can convert to and from the traditional x86 little-endian format. HLE/HLE+ (Hardware Lock Elision) provides a legacy-compatible instruction set interface for transactional execution.
3.5.2.2 Paravirtualization Random Number Generators#
Virtual appliances need a good source of entropy, for instance when generating key material during installation. Without any physical source of entropy (like disk interrupts) such operations may take a rather long time. One way to improve the situation in the future may be to include PV drivers that expose hardware RNGs to the guest, such as virtio-rng
.
This feature is available since SLE 11 SP2.
pvpanic is a paravirtualized device for reporting guest panic events to management tools. By default, pvpanic is disabled; add the flag -device pvpanic to the QEMU command line to enable it.
A bio-based I/O path for virtio-blk is available, which improves I/O performance for a VM which is using a high-speed device such as an SSD.
3.5.2.5 Containment of error when an SR-IOV device encounters an error#
If an SR-IOV device encounters an error and any of its VFs are assigned to guests, the affected guests are brought down without impacting any other running guests or the host.
3.5.2.6 KVM: Include Support for Multi-queue Networking#
Multi-queue networking (VMDq, Netqueue, etc.) in a hypervisor environment improves performance for 10G Ethernet and higher.
3.5.2.7 KVM: CPU Scheduling and Spinlock Improvements#
VCPU Scheduling Improvements: Implement changes to improve the performance of workloads in a virtualized environment (KVM) by addressing current inefficiencies in the scheduling of vCPUs.
3.5.2.8 KVM NUMA Home Noding Performance Improvements#
NUMA Home noding: Optimize the locality of CPU and memory across NUMA nodes for better performance.
Support will be provided to limit QEMU to only the system calls that it requires. New seccomp Kernel functionality is intended to be used to declare the whitelisted syscalls and syscall parameters. This will limit QEMU's syscall footprint, and therefore the potential Kernel attack surface. The idea is that if an attacker were to execute arbitrary code, they would only be able to use the whitelisted syscalls.
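QEMU exposes this seccomp-based filtering through its -sandbox switch; a sketch (all VM-specific options are placeholders):

```shell
# Start QEMU with the seccomp syscall filter enabled; QEMU is then
# restricted to its whitelisted system calls.
qemu-system-x86_64 -sandbox on -m 1024 -hda guest.img
```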
TPM passthrough provides QEMU a means to allocate a host TPM device to a guest. The host device passed to a guest must be used exclusively by the guest to which it is allocated. Either a physical TPM or a software TPM routed through the host kernel via CUSE may be used as the host TPM device.
Following is an example domain XML fragment to specify use of TPM passthrough.
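A fragment along the following lines (the device path shown is the documented default) selects TPM passthrough in a libvirt domain definition:

```xml
<devices>
  <tpm model='tpm-tis'>
    <backend type='passthrough'>
      <device path='/dev/tpm0'/>
    </backend>
  </tpm>
</devices>
```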
Where:
model is the host TPM model; defaults to 'tpm-tis'
backend is the backend device type; must be 'passthrough'
path is the path to the host TPM device; defaults to
/dev/tpm0
This feature is disabled by default. For more information, refer to http://wiki.qemu.org/Features/TPM.
3.5.2.11 KVM: Multiple TX Queue Support in macvtap/virtio-net (DCN)#
This allows a single guest to transmit multiple flows of network data using multiple CPUs simultaneously via multiple TX queues in virtio-net/vhost-net/macvtap.
The guest agent ( qemu-ga
) allows programs on the VM Host Server to directly communicate with a VM Guest via an emulated or paravirtualized serial console.
The SLE 12 kernel provides a new method of accessing PCI devices from user space called VFIO. The VFIO driver is an IOMMU/device-agnostic framework for exposing direct device access to user space in a secure, IOMMU-protected environment. This new access method is used by default by the libvirt framework.
To adjust more efficiently to a running guest's workload demands, KVM is now able to hot-add virtual CPUs (vCPUs) to a running virtual machine.
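With libvirt, the hot-add can be driven through virsh, for example (the domain name is a placeholder):

```shell
# Hot-add vCPUs to a running guest: raise the active vCPU count to 4.
# The guest must have been defined with a maximum of at least 4 vCPUs.
virsh setvcpus sles12-guest 4 --live
```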
3.5.3.1 Non-standard PCI Device Functionality May Render Pass-through Insecure#
Devices with capabilities or defects that are undocumented or that virtualization software is unaware of may allow guests to control parts of the host that they should not be in control of.
For more information, see http://xenbits.xen.org/xsa/advisory-124.html.
Multiple XEN watchdog instances are not supported. Enabling more than one instance can cause system crashes.
3.5.3.3 Importing SLES 11 Managed Domains from xend to libvirt#
The new xen2libvirt
tool provides an easy way to import domains managed by the deprecated xm/xend toolstack into the new libvirt/libxl toolstack. Several domains can be imported at once using its --recursive
mode.
For more information about the migration from xend/xm
to xl/libxl
, see the Virtualization Guide.
The pygrub
command is used to boot a virtual Xen machine according to a certain menu.lst
entry. Since SLES 11 SP3 pygrub
accepts the new flag [-l --list_entries]
to show GRUB entries in the guest.
3.5.4.1 zypper-docker, the Updater for Docker Images#
To discover if a Docker container is in need of an update, a manual zypper lu
was needed. After patching, the changes had to be committed to make them persistent, and the container needed to be restarted. This was necessary for each container.
Use zypper-docker
to list and apply updates to your Docker images. This ensures any container based on the given image will receive the updates.
The SLES 12 SP1 and SLES 11 SP4 Docker images are now added to the containers module.
To get a console to log in on a container, the command machinectl login lxc-containername is known not to work. Use virsh -c lxc:/// console containername instead.
Since SUSE Linux Enterprise Server 12, LXC is integrated into the libvirt library. This decision has several advantages over using LXC as a separate virtualization solution. The extra LXC component is obsolete now.
Guest block devices backed by files instead of physical storage grow over time, even if parts of them are unused, because the guest file system had no way to notify the back-end about unused blocks. As a result, the backing store required more disk space than needed.
libxl
and libvirt
provide settings for file backed storage to handle discard requests from KVM and Xen guests. Xen guests have discard support enabled per default. For KVM guests discard must be enabled in the guest configuration file.
In case the backing file was intentionally created non-sparse, discard support must be disabled to avoid fragmentation of the file. The xl domU.cfg syntax looks like this:
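A sketch of such a disk line (paths are placeholders; verify the exact option spelling against the xl disk configuration documentation, which defines discard/no-discard keywords):

```
# xl domU.cfg: file-backed disk with discard disabled, because the
# backing file was deliberately created non-sparse.
disk = [ 'target=/var/lib/xen/images/disk0.raw,format=raw,vdev=xvda,access=rw,no-discard' ]
```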
For libvirt based guests, the option discard='ignore'
must be added to the devices driver part of the XML file.
Discard requires file system support. For local file systems, only xfs and ext4 support the hole punching feature. Remote storage such as NFS has no support for discard, even if the backing store on the server would support it.
libvirt
now communicates with the Linux auditing subsystem on the host to issue records for a number of VM operations. This enhancement allows administrative users to collect a detailed audit trail of VM lifecycle events and resource assignments. A new tool, auvirt, is available to conveniently search the Linux audit trail for VM events.
Additional information on VM auditing is available in this article: http://www.ibm.com/developerworks/opensource/library/l-kvm-libvirt-audit/index.html
3.5.5.4 libvirt: dynamic allocation of Virtual Functions (VFs)#
Dynamic assignment from a pool of VFs allows utilizing SR-IOV cards while still supporting VM migration.
3.5.5.5 libvirt: Support DHCP Snooping and Dynamic ARP Inspection#
Libvirt now supports DHCP snooping and dynamic ARP inspection to protect the network from rogue DHCP servers and to drop packets with invalid IP/MAC bindings to or from the guests.
802.1Qbg-enabled switches perform better when migrating VMs from one switch port to another. QEMU/KVM guest migration has been enhanced with hooks to 'de-associate or move to pre-associate' on the source prior to suspend, and to restart on the target.
3.5.5.7 libvirt: extend support for lldpad synchronization#
When the VSI information is modified in the switch, lldpad synchronization keeps the VMs from losing network connectivity.
Currently it is not possible to boot SLES 12 from a virtual DVD on a Windows Server 2012 host using Hyper-V. For more information, see bnc#876640.
The new YaST virtualization tools allow you to install only selected components for Xen, KVM, or containers: the server part (hypervisor only), and/or all tools needed to administer VM guests. The YaST module name has changed to virtualization. To launch it from the command line, use:
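Assuming the renamed module, the launch command is:

```shell
# Launch the YaST virtualization module from the command line.
yast2 virtualization
```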
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application.
The system time of a guest will drift several seconds per day.
To maintain an accurate system time it is recommended to run ntpd
in a guest. The ntpd daemon can be configured with the YaST NTP Client module. In addition to such a configuration, the following two variables must be set manually to yes
in /etc/sysconfig/ntp
:
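The variable names below are an assumption based on the standard SUSE sysconfig layout; verify them against the comments in the shipped /etc/sysconfig/ntp:

```
# /etc/sysconfig/ntp -- force time synchronization on ntpd startup
NTPD_FORCE_SYNC_ON_STARTUP="yes"
NTPD_FORCE_SYNC_HWCLOCK_ON_STARTUP="yes"
```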
Windows Server 2012 R2 supports Gen2 VMs. For more information, see http://technet.microsoft.com/en-us/library/dn282285.aspx.
SLES 12 has been modified to provide full Gen2 VM support. Supported Hyper-V Gen2 technologies: PXE boot using a standard network adapter, boot from a SCSI virtual hard disk, boot from a SCSI virtual DVD, Secure Boot (enabled by default), and UEFI firmware support.
Libguestfs
is a set of tools for accessing and modifying virtual machine disk images. It can be used for many virtual image management tasks such as viewing and editing files inside guests (Linux guests only), scripting changes to VMs, monitoring disk used/free statistics, performing partial backups, and cloning VMs. See the SLE Virtualization Guide for more information and usage.
The updated drivers provide the following features:
A userland daemon to handle the file copy service is included.
The VMBUS driver utilizes all virtual CPUs (vCPUs) to communicate with the host, which improves performance.
Support for Generation2 VMs is included. 'Secure Boot' must be disabled in the VM settings on the host side, otherwise the VM will not start.
Support for kdump and kexec when running on Windows Server 2012 R2 is included.
The network driver was updated to remove the warning about outdated 'Integration Services' that was shown in the Hyper-V Manager GUI.
3.5.6.8 virt-manager: Default to Launching virt-install#
virt-install
is now the default installer when the Create VM button is selected in virt-manager. vm-install
will still be shipped on the media but is supported only as a deprecated tool: bugs may be fixed, but no new features will be added. For more information, see the SLE Virtualization Guide and the respective man pages.
In the past, it was necessary to install VMware tools separately, because they had not been shipped with the distribution.
SUSE Linux Enterprise 12 includes the open-vm-tools
package. These tools are pre-selected when installing on a VMware platform.
Partnering with VMware, SUSE provides full support for these tools. For more information, see http://kb.vmware.com/kb/2073803.
3.5.6.10 iproute2: Base Enablement for VXLAN Based Network Virtualization#
Provides iproute2 extensions to configure VXLAN tunnel end points.
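A typical tunnel endpoint setup with the extended iproute2 looks like this (VNI, addresses, and device names are placeholders):

```shell
# Create a VXLAN device with VNI 42, encapsulating frames over eth0
# to a multicast group on the standard VXLAN UDP port, and bring it up.
ip link add vxlan42 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789
ip link set vxlan42 up
```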
3.5.6.11 Kernel: Base Enablement for VXLAN Based Network Virtualization#
Provides kernel changes to configure VXLAN tunnel end points.
3.5.6.12 lldpad: 802.1Qbg Support over Bonded Interface#
Bonding interfaces allows the aggregation of bandwidth across multiple physical links to a switch to take full advantage of the 802.1Qbg capabilities.
virt-manager
is more friendly: on the Details page of the created file system, the values are now editable.
4 AMD64/Intel64 64-Bit (x86_64) Specific Information#
In the past, the default settings of trackpoint or pointing stick devices were different on various machines, and thus the behavior of these devices was not consistent.
These days people prefer to use the combination of trackpoint or pointing stick and middle button for scrolling. This means pressing the middle button while moving the trackpoint or pointing stick emulates a mouse wheel.
To make it work reliably, the following options are set by default:
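The exact option names and values below are an assumption (verify against the shipped 11-evdev.conf); they follow the evdev driver's wheel emulation options:

```
Section "InputClass"
        Identifier "middle button wheel emulation"
        MatchIsPointer "on"
        Option "EmulateWheel" "on"
        Option "EmulateWheelButton" "2"
        Option "EmulateWheelTimeout" "200"
EndSection
```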
Commenting out these three options with a '#' character at the beginning of the lines in /etc/X11/xorg.conf.d/11-evdev.conf will restore the upstream defaults: a real middle button, with scroll wheel emulation disabled again.
Some partners need to still run 32-bit applications in a 32-bit runtime environment on SUSE Linux Enterprise 12.
SUSE does not support 32-bit development on SLE 12. 32-bit runtime environments are available with SLE 12. If there is a need to develop 32-bit applications to run in the SLE 12 32-bit runtime environment then use the SLE 11 32-bit development tools to create these applications.
Due to limitations in the legacy x86_64 BIOS implementations, booting from devices larger than 2 TiB is technically not possible using legacy partition tables (DOS MBR).
Since SUSE Linux Enterprise Server 11 SP1, we support installation and boot using UEFI on the x86_64 architecture and certified hardware.
4.2.3 Installation on Native 4KiB Sector Drives (4kn) Supported with UEFI#
For the last 20 years, hard disks with 512-byte sectors have been in use. For some years now, there have been drives providing a 4KiB sector size internally while exposing 512-byte sectors externally as a backward compatibility layer (512-byte emulation, '512e'). These devices are fully supported in SUSE Linux Enterprise.
The installation on native 4KiB sector drives (4kn) in x86_64 systems with UEFI is supported, as is the use of 4 KiB sector drives as non-boot disks. Legacy (non UEFI) installations on x86_64 systems are not supported on 4KiB drives for technical reasons.
The Performance Scaled Messaging (PSM) API is Intel's low-level user-level communications interface for the Intel(R) True Scale Fabric family of products.
The PSM libraries are included in the libpsm_infinipath1-3.2-4.30.x86_64 source RPM and get built and installed as part of a default OS install process.
The primary way to use PSM is by compiling applications with an MPI that has been built to use the PSM layer as its interface to Intel HCAs. PSM is the high-performance interface to the Intel(R) True Scale HCAs.
To implement the Intel True Scale solution, follow these steps to install SUSE Linux Enterprise Server OFED support and MPIs:
Install SUSE Linux Enterprise Server 12 with InfiniBand and Scientific support. Here InfiniBand support installs the OFED stack, which includes the driver for Intel HCAs. Scientific support installs all the MPIs and related tests.
Verify the following RPMs are installed: infinipath-psm-*, mpitests-*, mpitests-mvapich2-*, mpitests-mvapich2-psm-*, mpitests-openmpi-*, mvapich2-*, mvapich2-psm-*, openmpi-*.
Install libipathverbs-rdmav2 RPM. This RPM is part of the ISO but does not get installed in step #1.
Reboot.
Load the following modules: ib_uverbs, ib_umad, ib_ucm.
Verify that ibv_devinfo shows the IB ports up.
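The last two steps can be sketched as (requires root and an installed OFED stack):

```shell
# Load the InfiniBand userspace access modules.
modprobe ib_uverbs
modprobe ib_umad
modprobe ib_ucm

# Verify the HCA is visible and its ports are up (state: PORT_ACTIVE).
ibv_devinfo
```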
SUSE Linux Enterprise Server 12 adds support for the latest Intel processors, including:
Intel Xeon processor E5-2600 v3 product family
Intel Xeon processor E5-1600 v3 product family
Intel Xeon processor E5-2400 v3 product family
Intel Xeon processor E5-4600 v3 product family
Intel Xeon processor E7-8800 v3 product family
Intel Xeon processor E7-4800 v3 product family
Intel Atom processor C2000 product family
virt-top
is a top-like utility for showing stats of virtualized domains. Many keys and command line options are the same as for ordinary top
.
4.4.1 Support of 'Movable Memory' in NUMA Systems#
Memory that exists at boot time is always managed by the NUMA zone ZONE_NORMAL. This memory contains kernel memory, thus it cannot be offlined and subsequently cannot be hot-removed. One solution for this issue is to gather kernel memory on a special system board, and assign movable memory to the other system boards.
To achieve this behaviour, use the kernel commandline option movable_node
. If this boot option is set, Linux checks the hot-pluggable bit of Memory Structure Affinity in the ACPI SRAT Table; if this bit is enabled, the memory is managed by ZONE_MOVABLE, and thus the other system boards can be hot-removed.
CAVEAT: this boot option may have a significant performance impact. Workloads that are very metadata-intensive may not be able to use all memory because the bulk of memory is in ZONE_MOVABLE. They will either suffer severely degraded performance or, in the worst case, the OOM killer will fire. Similarly, workloads that require large amounts of address space may fail because they cannot allocate page tables. On NUMA machines, such workloads may still suffer degraded performance because all their page table pages are allocated remote to the workload.
Enabling the feature will also limit the availability of system memory for certain features; e.g. tmpfs can only use memory from ZONE_NORMAL, and memory in ZONE_MOVABLE will be unavailable to it.
Summarizing: enabling movable_node trades the ability to hot-remove a full memory node against workload performance, the amount of memory that can be used, and the ability to even run a specific task. If you encounter one of these trade-offs, the only sensible option is to disable node memory hot-removal.
SUSE's Kernel team is working with the Linux community to find mitigations for those limitations as a long term goal.
4.4.2 Allow the BIOS to Keep Control of Error Correction Reporting and Threshold#
In theory, platform firmware has better knowledge of the appropriate thresholds to use based on OEM knowledge of the failure rates of components in the platform.
The SLES 12 kernel supports firmware-first mode for corrected errors, allowing firmware to take first control over memory error handling. Firmware then notifies Linux through APEI once memory errors exceed a platform-defined threshold. On receipt of an APEI notification, Linux immediately offlines pages in-kernel, isolating problematic memory, which results in improved system reliability and uptime.
Because Btrfs ties its smallest usable block size to the CPU architecture's page size, a Btrfs file system instance created on x86 (or other architectures with a different page size) will not be mountable and readable via Btrfs on POWER.
Huge Page Memory (16GB pages, enabled via HMC) is supported by the Linux kernel, but special kernel parameters must be used to enable this support. Boot with the parameters hugepagesz=16G hugepages=N
in order to use the 16GB huge pages, where N is the number of 16GB pages assigned to the partition via the HMC. The number of 16GB huge pages available cannot be changed once the partition is booted.
Systems may exhibit hardware or microcode issues which require inventory collection of vital product data (VPD), platform error reporting and handling, and responding to EPOW events to analyze the root cause and to plan for the right corrective response by service.
The basic POWER platform related tools and packages like ppc64-diag
, servicelog
, and lsvpd
are included to allow for faster problem resolution.
SLES 12 is built for and runs in little endian mode on the POWER8 architecture. SLES 12 little endian provides user-space support only for POWER8 64-bit applications. Applications built with SLES 10 or SLES 11 for big endian POWER will not be executable on the little endian based SLES 12.
The IBM PowerLinux development tools repository provides quick access to the latest IBM SDK and Advance Toolchain versions.
The IBM PowerLinux DLPAR tools repository provides quick access to latest service and productivity tools available from IBM.
5.7 mlx5 Driver Supports New Mellanox Connect-IB Adapter for Power#
Support for Mellanox Connect-IB adapter for Power is available.
Make sure the Firmware level of the Connect-IB adapter is version 10.10.2000 or greater.
5.8 Architecture Level Set (ALS): SLES 12 supports only POWER8 and greater processors#
Support of native Little-endian memory model
Support of Hardware Transaction Memory on POWER8 and later systems
Support of additional PowerISA-2.07 instructions including: 128-bit atomic memory update, enhanced vector crypto (AES, GCM, SHA) and check-sum operations, enhanced vector integer operations for 32, 64, 128-bits, and enhanced Vector Scalar capability with single precision float from 32 to 64 scalar float registers.
SLES 12 includes enabling for usage of the generic CPUidle framework to enter SNOOZE and NAP idle states for POWER systems.
5.10 Platform Resource Reassignment Notification Handling#
The POWER platform provides the capability to dynamically assign CPU, memory, and slot resources to partitions on a system. Over time this can result in a less than optimal resource assignment to each of the individual partitions on the system for CPUs and memory.
The POWER hypervisor provides the capability to transparently re-assign the resources allocated to a partition to provide a more optimized layout of resources. This update is communicated to the partitions via the RTAS event-scan mechanism and PRRN RTAS events. The update provided here allows Linux to dynamically update any resources affected by PRRN notifications.
This feature adds support for the new DAWR interface, which allows hardware watchpoints with longer ranges (up to 512 bytes wide), as well as the ability to disassemble POWER8 instructions.
SLES 12 builds GCC such that it defaults to generating POWER8-optimized binaries (that is, using -mcpu=power8 -mtune=power8
).
SLES 12 builds GLIBC with POWER8 tuning that includes optimized algorithms. With the new GNU IFUNC mechanism, future algorithms will not require an application rebuild using different compiler flags to select a different GLIBC.
SUSE Linux Enterprise 12 now supports up to 64 TB of memory on Power platforms.
With SUSE Linux Enterprise Server 12 Mesa supports 3D acceleration via llvmpipe on IBM POWER hardware.
5.16 Support for the IBM POWER8 Processor added to Valgrind#
All new POWER8 instructions are supported by all the Valgrind tools.
5.17 Support for the IBM POWER8 Processor Added to oprofile, libpfm, and PAPI#
A subset (approximately 150) of the native POWER8 events are supported in this version of these packages. The rest of the events will be added in a future release.
For more information, see http://www.ibm.com/developerworks/linux/linux390/documentation_novell_suse.html
IBM zEnterprise 196 (z196) and IBM zEnterprise 114 (z114) are referred to as z196 and z114 in the following.
Different options have been available in the past for the initial installation of System z operating systems. Over time, running Linux under z/VM has become the dominant use case, while running Linux in an LPAR is still present but less frequent. With additional features of the HMC microcode that allow IPL from DVD images, IPL from tape has declined in use.
Starting with SLES 12, IPL from tape for System z is no longer supported.
Newer machines provide new instructions and better instruction schedulers. The system compiler therefore defaults to generating code for z196 and scheduling instructions for zEC12.
Often it is difficult to correlate the commonly used name of the processor with the IBM model number for that processor.
The cputype command is now included in the s390-tools package. The cputype command prints both the IBM model number as well as the more commonly used processor name.
The Physical Channel ID (PCHID) enables hardware detection with a machine-wide unique identifier.
6.3.3 Add robustness against missing interrupts to non-path-grouped internal I/O requests#
With this feature for the common I/O layer, channel paths with missing interrupts during internal I/O do not make devices with remaining functioning channel paths unusable to device drivers.
6.4.1 Support of Live Guest Relocation (LGR) with z/VM 6.2#
Live guest relocation (LGR) with z/VM 6.2 requires z/VM service applied, especially with Collaborative Memory Management (CMMA) active (cmma=on).
Apply z/VM APAR VM65134.
Large Page support allows processes to allocate process memory in chunks of 1 MiB instead of 4 KiB. This works through the hugetlbfs.
6.4.3 Linux Guests Running on z/VM 5.4 and 6.1 Require z/VM Service Applied#
Linux guests using dedicated devices may experience a loop, if an available path to the device goes offline prior to the IPL of Linux.
Apply recommended z/VM service APARs VM65017 and VM64847.
Unlike in SLES 11, LUN scanning is enabled by default in SLES 12. Instead of having a user-maintained whitelist of FibreChannel/SCSI disks that are brought online to the guest, the system now polls all targets on a fabric. This is especially helpful on systems with hundreds of zFCP disks and exclusive zoning.
However, on systems with few disks and an open fabric, it can lead to long boot times or access to inappropriate disks. It can also lead to difficulties offlining and removing disks.
To disable LUN scanning, set the boot parameter zfcp.allow_lun_scan=0.
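For illustration, the parameter can be appended to the kernel parameters line in /etc/zipl.conf; the section name and root device below are placeholders, not taken from this document:

```
[SLES12]
    target = /boot/zipl
    image = /boot/image
    ramdisk = /boot/initrd
    parameters = "root=<root-device> zfcp.allow_lun_scan=0"
```

Run zipl afterwards so that the changed configuration is written to the boot record.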
For LUN Scanning to work properly, the minimum storage firmware levels are:
DS8000 Code Bundle Level 64.0.175.0
DS6000 Code Bundle Level 6.2.2.108
6.5.2 New Partition Types Added to the fdasd Command#
In SLES 11 SP2, new partition types were added to the fdasd command in the s390-tools package. These new partition types are not visible when creating partitions with YaST in SLES 12. If fdasd is used from the command line, it works as documented and desired.
6.5.3 Enabled FCP Hardware Data Router as Default#
To gain the performance improvements for certain workloads, data router support is enabled by default. This is tolerated in environments without data router support on LPAR and under z/VM.
6.5.4 QSAM Access Method for Data sharing with z/OS - Stage 1#
This feature introduces a new interface that enables Linux applications like Data Stage to access and process (read only) data in z/OS owned physical sequential data sets without interfering with z/OS. By avoiding FTP or NFS transfer of data from z/OS the turnaround time for batch processing is significantly reduced.
With increasing sizes of ECKD DASDs, the speed of formatting becomes an issue.
The performance can now be increased by using Parallel Access Volume (PAV) when formatting DASDs. In addition, with the new parameter -r (--requestsize), you can specify the number of cylinders to be formatted in one step.
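A hedged sketch of the new option; the device name, block size, and request size below are illustrative values only:

```
# Format a DASD using 10 cylinders per formatting request (illustrative values)
dasdfmt -b 4096 -d cdl --requestsize 10 /dev/dasdb
```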
Introduces the default MTU size of 1500 for OSA layer 2 and all traffic that uses DIX frames to remove System z specifics.
Adds support for IPv6 addresses to the src_vipa tool, which previously supported only IPv4.
Token Ring is no longer supported and can no longer be selected; Ethernet remains the only interface type. Therefore the parameter 'OsaMedium' is obsolete and is no longer recognized in parmfiles or on the command line. If the parameter is specified, it will appear in /etc/zipl.conf as an unnecessary kernel parameter.
6.7.1 Existing Data Execution Protection Removed for System z#
The existing data execution protection for Linux on System z relies on the System z hardware to distinguish instructions and data through the secondary memory space mode. As of System z10, new load-relative-long instructions do not make this distinction. As a consequence, applications that have been compiled for System z10 or later fail when running with the existing data execution protection.
Therefore, data execution protection for Linux on System z has been removed.
6.7.2 Install Pattern for HW Crypto Stack on System z in YaST#
It is possible to install the complete crypto stack by selecting System z HW crypto support at install time. It is available as an install pattern in the Software Selection dialog.
6.7.3 New libica APIs for Supported Crypto Modes Including Hardware or Software Indication#
New libica APIs show crypto exploiters what cryptographic functions are available and if hardware or software will be used to process cryptographic requests. In the past, this information could only be obtained through stand-alone tools primarily intended for administrators.
The CryptoExpress4 device driver and the opencryptoki EP11 token now provide access to the Enterprise PKCS#11 (EP11) features of the CEX4S crypto adapter, which implements certified PKCS#11 mechanisms.
6.7.5 CPACF MSA4 Extensions for opencryptoki and libica (Part 2)#
The libica and opencryptoki libraries now support algorithms for the Message Security Assist Extension 4 (MSA4) instructions in CPACF (Central Processor Assist for Cryptographic Function) and provide significant acceleration of complex cryptographic algorithms.
6.7.6 Support of SHA-256 Hash Algorithm in opencryptoki ICA Token#
SLES 12 includes opencryptoki 3.1, which comes with an ICA token that exploits the SHA-256 hash algorithm provided by the System z crypto hardware.
To ease installation with huge numbers of devices, SLES 12 lets the user switch cio_ignore on and off at install time to decide whether all devices should be displayed or only a specified subset. When selecting cio_ignore on, the list of devices to be displayed can be defined manually by selecting dedicated devices or by defining device ranges.
The SCSI dump tool now writes dumps directly to a SCSI partition, without using a file system. The option zipl -d is used for both DASD and SCSI stand-alone dumps. The option zipl -D has become obsolete and is no longer supported.
The SCSI dump tool can now also be installed on multipath devices.
6.8.3 Keywords for IPL and Console Devices for Use in cio_ignore for IBM System z#
This enables use of the keywords ipldev (for the IPL device) and condev (for the console device) to ease installation when a system uses cio_ignore to blacklist all devices at install time and does not have a default CCW console device number, has no devices other than the IPL device (as a base to clone Linux guests), or uses a ramdisk-based installation with no devices other than the CCW console.
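For illustration, a kernel parameter line using these keywords might look as follows (a sketch; combine with any other parameters your system needs):

```
# Ignore all devices except the IPL device and the console device
cio_ignore=all,!ipldev,!condev
```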
zPXE provides a function similar to PXE boot on x86/x86-64: a parameter-driven executable retrieves the installation source and instance-specific parameters from a specified network location, automatically downloads the respective kernel, initrd, and parameter files for that instance, and starts an automated (or manual) installation.
Compression is used in various places of the system, such as decompression of Java class files, PDF generation, compressed backups, and installation (binaries compressed in RPMs). Although beneficial in multiple aspects (data on disk, data transferred to memory), it requires significant processor resources.
Using an optimized version of zlib significantly improves performance and lowers processor resource consumption for applications using the respective library.
With SUSE Linux Enterprise Server 12, Mesa supports 3D acceleration via llvmpipe on IBM System z hardware.
6.9.3 Write Protection-based Dirty Page Detection for System z#
With SLES 12, the System z unique detection of dirty bits is converted to the handling of dirty bits in the page table entry. This change makes the dirty bit handling consistent with other architectures.
Provides performance improvement for applications that access large amounts of anonymous memory, such as heap space for Java programs or caching areas for databases.
6.9.5 Toolchain Support of IBM System z Enterprise EC12 (zEC12)#
The toolchain supports the IBM System z Enterprise EC12 (zEC12) instruction set to optimize performance with SLES 12 on zEC12.
6.9.6 STT_GNU_IFUNC Back-end Support and Exploitation for System z#
Support of the GNU specific symbol STT_GNU_IFUNC that allows for multiple versions of the same function in a library. The support enables optimized execution of performance critical functions, for example by providing variants exploiting the instruction sets of different CPU levels.
The STT_GNU_IFUNC symbol is supported in the gcc, binutils, and glibc packages. Optimized versions of the memcpy, memset, and memcmp functions are provided in the glibc package.
With SLES 12, gcc supports applications utilizing transactional-execution (TX) for simplified concurrency control via shared memory sections removing the limits for lock controlled execution.
7.1.1 Myricom 10-Gigabit Ethernet Driver and Firmware#
SUSE Linux Enterprise 12 (x86_64) uses the Myri10GE driver from the mainline Linux kernel. The driver requires a firmware file to be present, which is not delivered with SUSE Linux Enterprise 12.
Download the required firmware at http://www.myricom.com (http://www.myricom.com).
7.1.2 Updating Firmware for QLogic 82XX Based CNA#
If you are running a 4.7.x firmware version on a QLogic 82XX-based CNA, update the firmware to the latest version from the QLogic Web site, or to the version recommended by the OEM.
7.2.1 Brocade FCoE Switch with Firmware Older Than v6.4.3 Does Not Accept Fabric Logins From Initiator#
Once the link is up, the LLDP QoS query obtains the new PFC settings and an 'FCoE incapable' event is sent right away, which is correct. After negotiating with the neighbor, an LLDP frame with an unrecognized IEEE DCBX is received, so the link is declared CEE incapable and an 'FCoE capable' event with PFC = 0 is sent to the FCoE kernel. The neighbor then adjusts its version to match our CEE version; the right DCBX TLV is now found in the incoming LLDP frame and the link is declared CEE capable. At this point no further 'FCoE capable' event is sent, because one was already sent in the previous step.
To solve this, upgrade the switch firmware to v6.4.3 or above.
Older HP Smart Array controllers (before ProLiant G6) are no longer supported in SLES 12. The device driver for these controllers, cciss, has been removed from the distribution media, and support for G6 controllers has been added to the newer HP Smart Array driver, hpsa. All Smart Array controllers now use the hpsa driver, with the exception of the Dynamic Smart Array controllers, which require a special driver available from http://hp.com (http://hp.com).
HP Smart Array controller models no longer supported in SLES 12 include:
P400,
P400i,
P800,
E200,
E200i,
P700m,
6400,
641,
642,
6i
HP Smart Array controller models that will be supported with the hpsa driver instead of the cciss driver include:
P212,
P410,
P410i,
P411,
P711m,
P712m,
P812
Important: Customers with controller models identified above as transitioning from the cciss driver to the hpsa driver should be aware that there are differences in device presentation between cciss and hpsa. The older cciss driver is a block layer driver, presenting devices as /dev/cciss/c*d*, while the newer hpsa driver is a SCSI layer driver and follows standard SCSI practice of presenting devices as /dev/sd*. Configuration files that use device-specific naming may need to be edited to reflect the new device naming. For more information about hpsa and a comparison of the two drivers, see the following white paper:
http://h10032.www1.hp.com/ctg/Manual/c02677069.pdf (http://h10032.www1.hp.com/ctg/Manual/c02677069.pdf).
For IMSM and DDF RAIDs the mdadm driver is used unconditionally.
The overlays are no longer compiled into slapd; they can be loaded as modules at runtime.
New installation: Before activating an overlay for a database, the module must be loaded in the global section. This can be done with the 'moduleload' option.
Update from SLE 11: If you have activated LDAP overlays, you must load them in the slapd configuration; otherwise the LDAP server cannot be started.
New installation:
If you are using slapd.conf, insert 'moduleload <module name>' into the global section.
If you are using the config back-end, do the following steps:
1) Create a 'cn=Module' child entry:
2) Define the modules that must be loaded. For example, to load memberof and accesslog, execute:
Update:
If you are using slapd.conf, insert 'moduleload <module name>' into the global section.
If you are using the config back-end, do the following steps:
1) Create an ldif file for slapcat to load the needed modules. For example, if you are using the memberof and accesslog overlays, these are the right settings:
2) Add this child to the slapd configuration:
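As a sketch (the module path and file names are assumptions that depend on your installation), an ldif entry for loading the memberof and accesslog modules via the config back-end could look like:

```
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib64/openldap
olcModuleLoad: memberof.la
olcModuleLoad: accesslog.la
```

Such a file could then be added to the configuration database, for example with slapadd -n 0 -l modules.ldif while slapd is stopped.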
8.2.1 iscsitarget and Related Packages Replaced with lio#
iscsitarget and related packages are replaced with lio.
On SLES 12, suseRegister was replaced by the SUSEConnect command line tool. For usage information, see the following TID: https://www.suse.com/support/kb/doc.php?id=7016626 (https://www.suse.com/support/kb/doc.php?id=7016626)
LTTng provides a set of tools allowing for efficient and combined tracing of userspace and kernel code referencing a common time source. This allows users to identify performance issues and debug problems in complicated code involving multiple concurrent processes and threads. In addition to the tracers, viewing and analysis tools are provided supporting both text and graphical formats. The kernel tracing functionality is implemented via a suite of loadable kernel modules. The loading of these modules and control of the tracing system is handled by a single lttng utility.
8.2.4 Adding the 'dropwatch' Package and Enabling NET_DROP_MONITOR#
The dropwatch feature allows customers to easily observe and diagnose network performance problems caused by dropped packets. Thus the 'dropwatch' package was added and NET_DROP_MONITOR was enabled.
The default FTP client is lftp, which offers outstanding scriptability. Other clients such as ncftp and lukemftp are no longer available.
For specific configurations, such as low-memory systems, where the GNOME desktop environment is not suitable, a lightweight desktop is needed.
icewm has been chosen as a lightweight desktop to fill this need on SUSE Linux Enterprise Server.
The tar version in SLES and SLED 12 (SP0) did not handle extended attributes properly. A maintenance update for tar fixes this issue. This update introduces new package dependencies:
libacl1
libselinux1
Both of these packages are already required by other core packages in a SLE installation.
8.3.2 Upgrading PostgreSQL Installations from 9.1 to 9.4#
To upgrade a PostgreSQL server installation from version 9.1 to 9.4, the database files need to be converted to the new version.
Newer versions of PostgreSQL come with the pg_upgrade
tool that simplifies and speeds up the migration of a PostgreSQL installation to a new version. Formerly, it was necessary to dump and restore the database files which was much slower.
To work, pg_upgrade needs to have the server binaries of both versions available. To allow this, we had to change the way PostgreSQL is packaged as well as the naming of the packages, so that two or more versions of PostgreSQL can be installed in parallel.
Starting with version 9.1, PostgreSQL package names on SUSE Linux Enterprise products contain numbers indicating the major version. In PostgreSQL terms, the major version consists of the first two components of the version number, for example, 9.1, 9.3, and 9.4. So, the packages for PostgreSQL 9.3 are named postgresql93, postgresql93-server, etc. Inside the packages, the files were moved from their standard location to a versioned location such as /usr/lib/postgresql93/bin or /usr/lib/postgresql94/bin. This avoids file conflicts if multiple packages are installed in parallel. The update-alternatives mechanism creates and maintains symbolic links that cause one version (by default the highest installed version) to re-appear in the standard locations. By default, database data is stored under /var/lib/pgsql/data on SUSE Linux Enterprise.
The following preconditions have to be fulfilled before data migration can be started:
- If not already done, the packages of the old PostgreSQL version (9.3) must be upgraded to the latest release through a maintenance update.
- The packages of the new PostgreSQL major version need to be installed. For SLE 12, this means installing postgresql94-server and all the packages it depends on. Because pg_upgrade is contained in the package postgresql94-contrib, this package must be installed as well, at least until the migration is done.
- Unless pg_upgrade is used in link mode, the server must have enough free disk space to temporarily hold a copy of the database files. If the database instance was installed in the default location, the needed space can be determined by running the following command as root: du -hs /var/lib/pgsql/data. If space is tight, it might help to run the VACUUM FULL SQL command on each database in the PostgreSQL instance to be migrated; note that this might take very long.
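Given these preconditions, a migration run could look like the following sketch. Run it as the postgres user with both servers stopped; the new data directory name is an assumption for illustration, not mandated by this document:

```
# Initialize the new cluster, then migrate with pg_upgrade
/usr/lib/postgresql94/bin/initdb -D /var/lib/pgsql/data94

/usr/lib/postgresql94/bin/pg_upgrade \
    --old-bindir=/usr/lib/postgresql91/bin \
    --new-bindir=/usr/lib/postgresql94/bin \
    --old-datadir=/var/lib/pgsql/data \
    --new-datadir=/var/lib/pgsql/data94
```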
Upstream documentation about pg_upgrade, including step-by-step instructions for performing a database migration, can be found under file:///usr/share/doc/packages/postgresql94/html/pgupgrade.html (if the postgresql94-docs package is installed), or online under http://www.postgresql.org/docs/9.4/static/pgupgrade.html (http://www.postgresql.org/docs/9.4/static/pgupgrade.html). NOTE: The online documentation explains how you can install PostgreSQL from the upstream sources (which is not necessary on SLE) and also uses other directory names (/usr/local instead of the update-alternatives based path described above).
For background information about the inner workings of pg_upgrade and a performance comparison with the old dump-and-restore method, see http://momjian.us/main/writings/pgsql/pg_upgrade.pdf (http://momjian.us/main/writings/pgsql/pg_upgrade.pdf).
Mounting CIFS shares at system start via /etc/samba/smbfstab has been discontinued and is obsolete. The generic /etc/fstab now handles this.
The migration process requires two steps:
1) Append all your mount points from /etc/samba/smbfstab.rpmsave to /etc/fstab.
2) Add '0 0' (without quotes) to the end of each new CIFS mount line in /etc/fstab.
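For illustration, a resulting /etc/fstab entry might look like the following; the server, share, mount point, and mount options are placeholders:

```
//server/share  /mnt/share  cifs  credentials=/etc/samba/cifs.creds,_netdev  0 0
```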
With Apache 2.4, some changes have been introduced that affect Apache's access control scheme. Previously, with Apache 2.2, the directives 'Allow', 'Deny', and 'Order' determined whether access to a resource was granted. With 2.4, these directives have been replaced by the 'Require' directive. For backwards compatibility with 2.2 configurations, the SUSE Linux Enterprise Server 12 apache2 package understands both schemes, Deny/Allow (Apache 2.2) and Require (Apache 2.4).
For more information on how to easily switch between the two schemes, see the file /usr/share/doc/packages/apache2/README-access_compat.txt.
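For illustration, an Apache 2.2 stanza and its 2.4 equivalent might look like the following sketch (the directory path is a placeholder):

```
# Apache 2.2 scheme
<Directory "/srv/www/htdocs">
    Order allow,deny
    Allow from all
</Directory>

# Apache 2.4 scheme
<Directory "/srv/www/htdocs">
    Require all granted
</Directory>
```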
8.3.5 Samba: Changing 'winbind expand groups' to '0'#
Forthcoming Samba 4.2.0 provided by http://www.samba.org (http://www.samba.org) will come with 'winbind expand groups' set to '0' by default.
Samba post 4.1.10 provided by SUSE anticipates the new default.
The new default makes winbindd more reliable because it does not require SAMR access to domain controllers of trusted domains.
Note: Some legacy applications calculate the group memberships of users by traversing groups; such applications will require winbind expand groups = 1.
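If such a legacy application is in use, the old behavior can be restored in smb.conf, for example:

```
# /etc/samba/smb.conf -- restore group expansion for legacy applications
[global]
    winbind expand groups = 1
```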
We ship GNOME 3.10 with SUSE Linux Enterprise 12.
GNOME on SUSE Linux Enterprise is available in three different setups, which modify the desktop user experience:
SLE Classic: this setup uses a single bottom panel, similar to GNOME desktop as available on SUSE Linux Enterprise 11. This setup is default on SUSE Linux Enterprise 12.
GNOME: this is the GNOME 3 upstream user experience, also sometimes called 'GNOME Shell'. This setup may be more suitable for touchscreens.
GNOME Classic: this setup uses two panels (one top panel, one bottom panel), similar to the upstream GNOME 2 desktop.
The setup can be changed at login time, in GDM, using the gear icon in the password prompt screen. It can also be modified using YaST, systemwide.
Caveats:
With SLE 11, after joining a Microsoft domain, GDM displayed the available domain names as a drop-down box below the user name and password fields. This behavior has changed.
With SLE 12, you must manually prefix the login name with the domain and the winbind separator. As soon as you click the 'Not listed?' text, GDM will display a hint such as '(e.g., domain\user)'.
IBM Java 7 Release 1 provides performance improvements through IBM POWER8 and IBM zEnterprise EC12 exploitation.
Parted was upgraded to version 3.1.
This version can no longer resize file systems contained within a partition. Parted can resize partitions, but to resize the contained file system, an external tool such as resize2fs has to be used.
With the upgrade to Qt 5, the QML technology is now also available.
8.3.10 supportconfig Output Contains dmidecode Information by Default#
On platforms supporting dmidecode, the output of the supportconfig tool now contains the dmidecode output. Previously, this was only done when explicitly activated with a parameter; the default was changed to always include it, in order to deliver better support results.
BlueZ 4 is no longer maintained upstream. Thus upgrading to BlueZ 5 ensures that you will get all the latest upstream bug fixes and enhancements.
BlueZ 5 comes with numerous new features, API simplifications, and other improvements such as Low Energy support. It is a new major version of the Bluetooth handling daemon and utilities.
Note: The new major version indicates that the API is not backwards compatible with BlueZ 4, which means that all applications, agents, etc. must be updated.
A Machine Owner Key (MOK) is a type of key that a user generates and uses to sign an EFI binary. This is a way for the machine owner to have ownership over the platform’s boot process.
Suitable tools come with the mokutil package.
MariaDB is a backward-compatible replacement for MySQL.
If you update from SLE 11 to SLE 12, it is advisable to do a manual backup before the system update. This could help if the database fails to start because of issues with the storage engine's on-disk layout.
After the update to SLE 12, a manual step is required to actually get the database running (this way you quickly see if something goes wrong):
The old PCMCIA, based on ISA and 16-bit only, is no longer supported under SLE 12. Modern laptops use CardBus (based on PCI), which continues to be supported.
The Legacy Module helps you migrate applications from SUSE Linux Enterprise 10 and 11 and other systems to SUSE Linux Enterprise 12 by providing packages which are discontinued on SUSE Linux Enterprise Server, such as sendmail, syslog-ng, IBM Java6, and a number of libraries (for example openssl-0.9.8).
Access to the Legacy Module is included in your SUSE Linux Enterprise Server subscription. The module has a different lifecycle than SUSE Linux Enterprise Server itself. Packages in this module are usually supported for at most three years. Support for Sendmail and IBM Java 6 will end in September 2017 at the latest.
8.4.3 Command Line Interface for Managing Packages#
YaST as a command line tool for managing packages is deprecated. Instead of yast with the command line switches -i, --install, --update, or --remove for installing, updating, or removing packages, use zypper.
For more information, see the zypper man page.
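For example, the former yast switches map to zypper roughly as follows (the package name is a placeholder):

```
zypper install vim    # instead of: yast -i vim
zypper update vim     # instead of: yast --update vim
zypper remove vim     # instead of: yast --remove vim
```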
libsysfs has been deprecated and replaced by libudev. If you have self-compiled applications that previously used libsysfs, you have to recompile them using libudev.
The dhcpcd package was replaced by the wicked and dhcp-client packages.
8.4.6 Hardware Deprecation Associated with an Emulex lpfc Driver Update#
Emulex will deprecate hardware in the future, which will impact SLES 12 SP1. The following hardware associated with the Emulex lpfc driver will be deprecated for SLES 12 SP1:
Device IDs and Device Name
Raw devices are deprecated.
8.4.8 Packages Removed with SUSE Linux Enterprise Server 12#
The following packages were removed with the major release of SUSE Linux Enterprise Server 12:
The System Analysis and Tuning Guide shipped with SLES 12 still mentions the pm-profiler package, which no longer exists in SUSE Linux Enterprise 12. The 'pm-profiler' package has been replaced with the 'tuned' package.
The SUSE SAM is removed from the media.
The inn package is no longer available in SLE 12.
scsirastools was designed to work with now-obsolete parallel SCSI enclosures. This package is no longer available in SLE 12.
As announced with SLE 11, LPRng is discontinued with SLE 12.
The following X11 drivers are no longer provided in SLE 12:
xf86-video-ark
xf86-video-chips
xf86-video-geode
xf86-video-glint
xf86-video-i128
xf86-video-neomagic
xf86-video-newport
xf86-video-r128
xf86-video-savage
xf86-video-siliconmotion
xf86-video-tdfx
xf86-video-tga
xf86-video-trident
xf86-video-voodoo
xf86-video-sis
xf86-video-sisusb
xf86-video-openchrome
xf86-video-unichrome
xf86-video-mach64
8.4.8.7 Nagios Server Now Part of a SUSE Manager Subscription#
Support for Icinga (a successor of Nagios) will not be part of the SUSE Linux Enterprise Server 12 subscription.
Fully supported Icinga packages for SUSE Linux Enterprise Server 12 will be available as part of a SUSE Manager subscription. In the SUSE Manager context we will be able to deliver better integration into the monitoring frameworks.
More frequent updates on the monitoring server parts than in the past are planned.
The package python-ordereddict is no longer available on SLES 12. The class collections.OrderedDict is provided by both the python and python3 packages.
GRUB2 is now available on all SUSE Linux Enterprise 12 architectures and is the only supported bootloader. Other bootloaders that were supported in SLE 11 have been removed from the distribution and are no longer available.
8.4.8.10 YaST Modules Dropped Starting with SUSE Linux Enterprise 12#
The following YaST modules or obsolete features of modules are not available in the SUSE Linux Enterprise 12 code base anymore:
yast2-phone-services
yast2-repair
yast2-network: DSL configuration
yast2-network: ISDN configuration
yast2-network: modem support
yast2-backup and yast2-restore
yast2-apparmor: incident reporting tools
yast2-apparmor: profile generating tools
yast2-*creator (moved to SDK)
YaST installation into directory
yast2-x11
yast2-mouse
yast2-irda (IrDA)
YaST Boot and Installation server modules
yast2-fingerprint-reader
yast2-profile-manager
8.4.8.11 Mono Platform and Programs No Longer Provided#
Starting with SLE 12, the Mono platform and Mono based programs are no longer supported.
These are the replacement applications:
gnote (instead of Tomboy)
shotwell (instead of F-Spot)
rhythmbox (instead of Banshee)
8.4.8.12 YaST No Longer Supports Configuring Modem Devices#
YaST (yast2-network) no longer offers modem configuration dialogs.
It is still possible to configure modems manually.
8.4.8.13 YaST No Longer Supports Configuring ISDN Devices#
YaST (yast2-network) no longer supports configuring ISDN devices. If needed, NetworkManager supports such devices.
8.4.8.14 YaST No Longer Supports Configuring DSL Devices#
YaST (yast2-network) no longer supports configuring DSL devices. If needed, NetworkManager supports such devices (e.g., DSL cable modems).
8.4.9 Packages Removed with SUSE Linux Enterprise Server 12 SP1#
The following packages are to be removed with the release of SUSE Linux Enterprise Server 12 SP1:
8.4.10 Packages and Features to Be Removed in the Future#
SLE 12 features the Qt4 toolkit. Qt4 will be supported at least until the release of SLE 12 Service Pack 3. Hence it is recommended to migrate applications to Qt5 and start new projects using Qt5.
8.4.10.2 Use /etc/os-release Instead of /etc/SuSE-release#
Starting with SLE 12, the /etc/SuSE-release file is deprecated. It should not be used to identify a SUSE Linux Enterprise system. This file will be removed in a future Service Pack or release.
The file /etc/os-release is now decisive. This file is a cross-distribution standard to identify a Linux system. For more information about the syntax, see the os-release man page (man os-release).
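As a sketch of how the new file can be consumed from a script: /etc/os-release is a shell-sourceable list of KEY=value assignments. The sample content below is illustrative; on a real SLES 12 system you would source /etc/os-release directly.

```shell
# Write a sample os-release file (illustrative content), then source it as a script would
cat > /tmp/os-release.sample <<'EOF'
NAME="SLES"
VERSION="12"
ID="sles"
PRETTY_NAME="SUSE Linux Enterprise Server 12"
EOF

# Sourcing the file sets the variables in the current shell
. /tmp/os-release.sample
echo "Running on $PRETTY_NAME (id=$ID, version=$VERSION)"
```
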
8.4.10.3 Deprecate DMSVSMA for snIPL for SLES 12 SP1#
The RPC protocol is used only with old z/VM versions that are going out of service.
The support of remote access for snIPL to z/VM hosts via the RPC protocol is therefore deprecated starting with SLES 12 SP1. It is recommended to use remote access to z/VM hosts via SMAPI, provided by the supported z/VM 5.4 and z/VM 6.x versions. For details about setting up your z/VM system for API access, see z/VM Systems Management Application Programming, SC24-6234.
The sendmail package is deprecated and will be discontinued with one of the next service packs. Consider using Postfix as a replacement.
8.5.1 Wireless Drivers Moved to kernel-default-extra#
The following wireless drivers have been moved to kernel-default-extra package:
atmel_cs
atmel_pci
iwl3945
iwlagn
ipw2100
ipw2200
p54pci
p54usb
prism54
orinoco
orinoco_cs
orinoco_nortel
orinoco_pci
orinoco_plx
orinoco_tmd
module-init-tools is replaced by kmod.
Caveat: With the replacement, the modprobe list command (-l) is no longer available. As a workaround you can use find or grep; for example, if you are looking for modules starting with xt:
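A minimal sketch of the workaround; a mock directory stands in for /lib/modules/$(uname -r) so the example is self-contained, and the module file names are illustrative:

```shell
# Create a mock module tree (stand-in for /lib/modules/$(uname -r))
mkdir -p /tmp/mockmodules/kernel/net/netfilter
touch /tmp/mockmodules/kernel/net/netfilter/xt_conntrack.ko
touch /tmp/mockmodules/kernel/net/netfilter/xt_tcpudp.ko
touch /tmp/mockmodules/kernel/net/netfilter/nf_nat.ko

# List modules whose name starts with "xt", as "modprobe -l 'xt*'" once did
find /tmp/mockmodules -name 'xt*.ko' | sort
```

On a real system, replace /tmp/mockmodules with /lib/modules/$(uname -r).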
AppArmor now offers normalized command names:
aa-notify instead of aa-apparmor_notify or apparmor_notify
aa-status instead of aa-apparmor_status (apparmor_status is still supported)
8.5.4 Legacy module-init-tools Replaced with kmod#
The kmod package is a replacement for the former module-init-tools. In addition to well-known tools like lsmod, modprobe, and modinfo, the package offers a shared library for use by system management services that need to query and manipulate Linux kernel modules.
8.5.5 NetworkManager Part of the Workstation Extension#
NetworkManager, primarily used on Desktops and Notebooks where one user is working with one specific machine, is now part of the Workstation Extension. For all the other use cases, and especially all server workloads, the default provided by SLES is Wicked.
SLES 12 does not offer the cyrus-imapd package and hence Cyrus IMAP and POP Mail Server is not available on SLES 12.
Users should consider a migration to Dovecot. SLES 12 does not provide utilities for the migration; however, there are some community tools: http://wiki2.dovecot.org/Migration/Cyrus
Configuration:
There is no YaST support for Dovecot configuration. If you want to deliver local mail to Dovecot, follow these steps:
Set MAIL_CREATE_CONFIG to 'no' in /etc/sysconfig/mail to prevent yast2 from overriding your configuration.
Set mailbox_command = /usr/lib/dovecot/dovecot-lda -f '$SENDER' -a '$RECIPIENT' in /etc/postfix/main.cf.
Set mail_location = maildir:~/Maildir (or your preferred value) in /etc/dovecot/conf.d/10-mail.conf.
Set a normal user as alias for root in /etc/aliases. Delivery to the user 'root' is not possible.
Execute following commands:
postalias /etc/aliases
systemctl restart postfix
systemctl enable dovecot
systemctl start dovecot
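Taken together, the file edits in the steps above amount to the following fragments (values are quoted from the steps; mail_location is shown with the example value and should be adjusted to your preference):

```
# /etc/postfix/main.cf
mailbox_command = /usr/lib/dovecot/dovecot-lda -f '$SENDER' -a '$RECIPIENT'

# /etc/dovecot/conf.d/10-mail.conf
mail_location = maildir:~/Maildir
```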
Autoyast:
The postfix_mda tag of the mail section may only contain the following values: local, procmail.
8.5.7 Replacing syslog-ng and syslog With rsyslog#
On new installations, rsyslog is installed instead of the former syslog-ng and syslog.
8.5.8 Printing System: Improvements and Incompatible Changes#
CUPS Version Upgrade to 1.7
CUPS >= 1.6 has major incompatible changes compared to CUPS up to version 1.5.4 in particular when printing via network:
The IPP protocol default version increased from 1.1 to 2.0. Older IPP servers such as CUPS 1.3.x (for example in SLE 11) reject IPP 2.0 requests with 'Bad Request' (see http://www.cups.org/str.php?L4231). For such older servers, the older IPP protocol version must be specified explicitly: add '/version=1.1' to the ServerName value in client.conf (e.g., ServerName older.server.example.com/version=1.1), to the value of the CUPS_SERVER environment variable, or to the server name given with the -h option (e.g., lpstat -h older.server.example.com/version=1.1 -p).
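For example, to pin the IPP version for one older print server via client.conf (the hostname is the placeholder used above):

```
# /etc/cups/client.conf — force IPP 1.1 for a pre-1.6 CUPS server
ServerName older.server.example.com/version=1.1
```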
CUPS Browsing is dropped in CUPS, but the new package cups-filters provides cups-browsed, which offers basic CUPS Browsing and Polling functionality. The native protocol in CUPS for automatic client discovery of printers is now DNS-SD. Start cups-browsed on the local host to receive traditional CUPS Browsing information from traditional remote CUPS servers. To broadcast traditional CUPS Browsing information into the network so that traditional remote CUPS clients can receive it, set 'BrowseLocalProtocols CUPS' in /etc/cups/cups-browsed.conf and start cups-browsed.
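The broadcast setup described above amounts to one directive (quoted from the text) plus starting the service:

```
# /etc/cups/cups-browsed.conf — broadcast traditional CUPS Browsing information
BrowseLocalProtocols CUPS
```

Then run systemctl start cups-browsed.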
Some printing filters and back-ends are dropped from CUPS, but the new package cups-filters provides them. Therefore cups-filters is usually needed (it is recommended at the RPM level), but it is not strictly required.
The cupsd configuration directives are split into two files: cupsd.conf (can also be modified via HTTP PUT, e.g. via cupsctl) and cups-files.conf (can only be modified manually by root). This gives better default protection against misuse of privileges by normal users whom root has specifically allowed to change the cupsd configuration (see http://www.cups.org/str.php?L4223, CVE-2012-5519, and bnc#789566).
CUPS banners and the CUPS test page are no longer supported since CUPS >= 1.6; the banners and the test page from cups-filters must be used instead. The CUPS banner files in /usr/share/cups/banners/ and the CUPS test page /usr/share/cups/data/testprint (which is also a CUPS banner file type) are no longer provided in the cups RPM, because they no longer work since CUPS >= 1.6 (see http://www.cups.org/str.php?L4120): there is no longer a filter that can convert the CUPS banner files. Since CUPS >= 1.6, only the banner files and test page in the cups-filters package work, via the cups-filters PDF workflow; the cups-filters package also provides the matching bannertopdf filter.
For details, see https://bugzilla.suse.com/show_bug.cgi?id=735404.
Traditional CUPS version 1.5.4 Provided in the Legacy Module
We provide the last traditional CUPS version 1.5.4 as 'cups154' RPMs in the 'legacy' module. If CUPS version 1.7 does not support particular needs, you can still use CUPS 1.5.4 (under the conditions of the 'legacy' module). This could be important, if you need a traditional CUPS server with original CUPS Browsing features.
For those users, any (semi-)automated CUPS version upgrade must be prevented, because CUPS > 1.5.4 has major incompatible changes compared to CUPS <= 1.5.4. Therefore the CUPS 1.5.4 RPM package name contains the version, and the package conflicts with higher versions. This avoids an installed CUPS 1.5.4 being accidentally replaced with a higher version. It is not possible to have different versions of the CUPS libraries installed at the same time.
The API in CUPS 1.7 is compatible with the CUPS 1.5.4 API (existing functions are not changed) but newer CUPS libraries provide some new functions. There could be applications that might use newer CUPS library functions so that such applications would require the current CUPS 1.7 libraries. It is not possible to use CUPS 1.5.4 together with applications that require the current CUPS 1.7 libraries.
PDF Now Common Printing Data Format
There is a general move away from PostScript to PDF as the standard print job format. This change is advocated by the OpenPrinting workgroup of the Linux Foundation and the CUPS author.
This means that application programs usually no longer produce PostScript output by default when printing but instead PDF.
As a consequence, the default processing by which application printing output is converted into the 'language' that the particular printer accepts (the so-called 'CUPS filter chain') has fundamentally changed from a PostScript-centric workflow to a PDF-centric workflow.
Accordingly, the upstream standard for CUPS under Linux (CUPS plus the cups-filters package) is now PDF-based job processing: all non-PDF input is converted to PDF first, page management options are applied by a pdftopdf filter, and Ghostscript is called with PDF as input.
With PDF as the standard print job format, traditional PostScript printers can no longer print application output directly, so the printing workflow requires a conversion step from PDF to PostScript. There are also PostScript+PDF printers that can print both PostScript and PDF directly.
For details, see the section 'Common printing data formats' in the SUSE wiki article 'Concepts printing' at http://en.opensuse.org/Concepts_printing.
8.5.9 groff: /etc/papersize No Longer Depends on sysconfig Variables#
/etc/papersize no longer inherits settings from /etc/sysconfig/language when running SuSEconfig.
Set /etc/papersize directly, e.g.:
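For A4 paper, for example, /etc/papersize would contain the single line ('a4' is an example value; see man 5 groff_font for accepted paper size keywords):

```
a4
```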
For details, see man 5 groff_font
('papersize string').
Module Name | Content | Life Cycle |
---|---|---|
Web and Scripting Module | PHP, Python, Ruby on Rails | 3 years, ~18 months overlap |
Legacy Module | Sendmail, old Java, … | 3 years |
Public Cloud Module | Public cloud initialization code and tools | Continuous integration |
Toolchain Module | GCC | Yearly delivery (starts mid 2015) |
Advanced Systems Management Module | cfengine, puppet and the new 'machinery' tool | Continuous integration |
This module gives you a sneak peek into our upcoming systems management toolbox, which allows you to inspect systems remotely, store their system descriptions, and create new systems to deploy in datacenters and clouds. The toolbox is still in active development and will get regular updates. We welcome feedback!
Access to this module is included in your SUSE Linux Enterprise Server subscription. The module has a different lifecycle than SUSE Linux Enterprise Server itself: as stability of APIs and ABIs is not yet guaranteed, we support this technology only on systems which apply all our updates to this channel in a timely manner.
The package is called machinery; for more information, see the Machinery Project Website (http://machinery-project.org/).
8.6.2 SUSE Linux Enterprise Public Cloud Module 12#
The Public Cloud Module is a collection of tools that enables you to create and manage cloud images from the command line on SUSE Linux Enterprise Server. When building your own images with KIWI or SUSE Studio, initialization code specific to the target cloud is included in that image.
Access to the Public Cloud Module is included in your SUSE Linux Enterprise Server subscription. The module has a different lifecycle than SUSE Linux Enterprise Server itself. Packages usually follow the upstream development closely to enable you to take advantage of the most recent development in the public cloud space.
This module gives you access to a more recent GCC and toolchain in addition to the default compiler of SUSE Linux Enterprise.
Access to the Toolchain Module is included in your SUSE Linux Enterprise Server subscription. The module has a different lifecycle than SUSE Linux Enterprise Server itself: once a year, packages in this module will be updated and older versions discontinued accordingly.
8.6.4 SUSE Linux Enterprise Web and Scripting Module 12#
The SUSE Linux Enterprise Web and Scripting Module delivers a comprehensive suite of scripting languages, frameworks, and related tools helping developers and systems administrators accelerate the creation of stable, modern web applications, using dynamic languages, such as PHP, Ruby on Rails, and Python.
Access to the Web and Scripting Module is included in your SUSE Linux Enterprise Server subscription. The module has a different lifecycle than SUSE Linux Enterprise Server itself: package versions in this module are usually supported for at most three years. We plan to release more recent versions within this three-year period; the exact dates may differ per package.
This section contains information about system limits, a number of technical changes and enhancements for the experienced user.
When talking about CPUs, we follow this terminology:
- CPU Socket: the visible physical entity, as it is typically mounted on a motherboard or an equivalent.
- CPU Core: the (usually not visible) physical entity as reported by the CPU vendor. On System z this is equivalent to an IFL.
- Logical CPU: what the Linux kernel recognizes as a 'CPU'. We avoid the word 'thread' (which is sometimes used), as it would become ambiguous subsequently.
- Virtual CPU: a logical CPU as seen from within a virtual machine.
SLES 12 supports the following virtualized network drivers:
Full virtualization: Intel e1000
Full virtualization: Realtek 8139
Paravirtualized: QEMU Virtualized NIC Card: virtio (KVM only)
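As an illustration, the paravirtualized virtio NIC is typically declared in a KVM guest's libvirt domain XML like this (a sketch; the 'default' network name is an assumption about your libvirt setup):

```
<!-- hypothetical libvirt domain XML fragment: paravirtualized virtio NIC -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```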
9.2 Virtualization: Devices Supported for Booting#
SLES 12 supports booting VM guests from the following devices:
Parallel ATA (PATA/IDE)
Advanced Host Controller Interface (AHCI)
Floppy Disk Drive (FDD)
virtio-blk
virtio-scsi
Preboot eXecution Environment (PXE) rom (for supported Network Interface Cards)
Booting from USB and PCI pass-through devices is not supported.
9.3 Virtualization: Supported Disks Formats and Protocols#
The following disk formats support read-write access (RW):
The following disk formats support read-only access (RO):
- vmdk
- vpc
- vhd/vhdx
The following protocols can be used for read-only access (RO) to images:
When using Xen, the qed format will not be displayed as a selectable storage in virt-manager.
This table summarizes the various limits which exist in our recent kernels and utilities (where related) for SUSE Linux Enterprise Server 12.
SLES 12 (3.12) | x86_64 | s390x | ppc64le |
---|---|---|---|
CPU bits | 64 | 64 | 64 |
max. # Logical CPUs | 8192 | 256 | 2048 |
max. RAM (theoretical / certified) | > 1 PiB/64 TiB | 4 TiB/256 GiB | 1 PiB/64 TiB |
max. user-/kernelspace | 128 TiB/128 TiB | φ/φ | 2 TiB/2 EiB |
max. swap space | up to 29 * 64 GB (x86_64) or 30 * 64 GB (other architectures) | ||
max. # processes | 1048576 | ||
max. # threads per process | Maximum limit depends on memory and other parameters (Tested with more than 120000). | ||
max. size per block device | up to 8 EiB on all 64-bit architectures |
FD_SETSIZE | 1024 |
SLES 12 GA Virtual Machine (VM) | Limits |
---|---|
Max VMs per host | unlimited (total number of virtual CPUs in all guests being no greater than 8 times the number of CPU cores in the host) |
Maximum Virtual CPUs per VM | 160 |
Maximum Memory per VM | 4 TiB |
Maximum Virtual Block Devices per VM | 20 virtio-blk, 4 IDE |
Maximum Number of Network Cards per VM | 8 |
Virtual Host Server (VHS) limits are identical to SUSE Linux Enterprise Server.
9.5.1 Virtualization: Supported Live Migration Scenarios#
The following KVM host operating system combinations will be fully supported (L3) for live migrating guests from one host to another:
VM from a SLES12 host to SLES12 host
VM from a SLES11 SP3 host to SLES12 host
The following KVM host operating system combinations will be fully supported (L3) for live migrating guests from one host to another, later when released:
VM from a SLES12 host to SLES12 SP1 host
VM from a SLES11 SP4 host to SLES 12 host
All guests as outlined in the Virtualization Guide, chapter Supported VM Guests, are supported.
Backward migration is not supported:
VM from a SLE12 host to SLE11 SP3 host
VM from a SLE11 SP3 host to SP2/SP1 host
KVM guests are able to see the new size of their disk after a resize on the host, using the virsh blockresize command.
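A sketch of this workflow (the domain name, disk target, and size are hypothetical; the command is shown as a dry run — remove the echo to execute it for real):

```shell
# Grow the disk of a running KVM guest; the guest sees the new size immediately.
DOMAIN=sles12-guest    # hypothetical domain name
TARGET=vda             # disk target, as listed by: virsh domblklist "$DOMAIN"
echo virsh blockresize "$DOMAIN" "$TARGET" 40G
```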
Since SUSE Linux Enterprise Server 11 SP2, we removed the 32-bit hypervisor as a virtualization host. 32-bit virtual guests are not affected and are fully supported with the provided 64-bit hypervisor.
SLES 12 GA Virtual Machine (VM) | Limits |
---|---|
Maximum VMs per host | 64 |
Maximum Virtual CPUs per VM | 64 |
Maximum Memory per VM | 16 GiB x86_32, 511 GiB x86_64 |
Max Virtual Block Devices per VM | 100 PV, 100 FV with PV drivers, 4 FV (Emulated IDE) |
SLES 12 GA Virtual Host Server (VHS) | Limits |
---|---|
Maximum Physical CPUs | 256 |
Maximum Virtual CPUs | 256 |
Maximum Physical Memory | 5 TiB |
Maximum Dom0 Physical Memory | 500 GiB |
Maximum Block Devices | 12,000 SCSI logical units |
Maximum iSCSI Devices | 128 |
Maximum Network Cards | 8 |
Maximum VMs per CPU Core | 8 |
Maximum VMs per VHS | 64 |
Maximum Virtual Network Cards | 64 across all VMs in the system |
In Xen 4.4, the hypervisor bundled with SUSE Linux Enterprise Server 12, Dom0 is able to see and handle a maximum of 512 logical CPUs. The hypervisor itself, however, can only access up to 256 logical CPUs and schedule those for the VMs.
For more information about these acronyms, refer to the official Virtualization Documentation.
PV: Paravirtualization
FV: Full Virtualization
9.6.1 Migrate VMs from SLE11/xend to SLE12/libxl Using Live Migration#
SLE 10 and SLE 11 use xend to manage guests; SLE 12 uses libxl. Live migration from xend to libxl is not implemented: nothing in a libxl-based tool stack is able to receive guests from xend. Furthermore, the data format that describes a guest configuration differs slightly between xend and libxl.
The same applies to VMs managed by libvirtd because it uses either xend or libxl to manage a VM.
At this point, live migration from xend-based hosts (SLE 10/SLE 11) to libxl-based hosts (SLE 12) is not possible. Shut down the guest on the SLE 11 host and start it again on the SLE 12 host. For more information about the change from xend/xm to xl/libxl, refer to the official Virtualization Documentation.
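The offline move described above can be sketched as follows (the guest name and configuration path are hypothetical, and each command is shown as a dry run with echo; run the real command on the host indicated in its comment):

```shell
# On the SLE 11 host (xend tool stack): shut the guest down cleanly.
echo xm shutdown sles-guest

# On the SLE 12 host (libxl tool stack): start it from its converted configuration.
echo xl create /etc/xen/sles-guest.cfg
```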
SUSE Linux Enterprise was the first enterprise Linux distribution to support journaling file systems and logical volume managers, back in 2000. Later we introduced XFS to Linux, which today is seen as the primary workhorse for large-scale file systems and for systems with heavy load and multiple parallel read and write operations. With SUSE Linux Enterprise 12 we take the next step of innovation, using the copy-on-write file system Btrfs as the default for the operating system, to support system snapshots and rollback.
“+”: supported; “-”: unsupported.
Feature | Btrfs | XFS | Ext4 | Reiserfs | OCFS 2 ** |
---|---|---|---|---|---|
Data/Metadata Journaling | N/A | - / + | - / + | - / + | |
Journal internal/external | N/A | + / + | + / - | ||
Offline extend/shrink | + / + | - / - | + / + | + / - | |
Online extend/shrink | + / + | + / - | + / - | + / - | + / - |
Inode-Allocation-Map | B-tree | B+-tree | table | u. B*-tree | table |
Sparse Files | + | ||||
Tail Packing | + | - | + | - | |
Defrag | + | - | |||
ExtAttr / ACLs | + / + | ||||
Quotas | + | ||||
Dump/Restore | - | + | - | ||
Blocksize default | 4KiB | ||||
max. Filesystemsize [1] | 16 EiB | 8 EiB | 1 EiB | 16 TiB | 4 PiB |
max. Filesize [1] | 16 EiB | 8 EiB | 1 EiB | 1 EiB | 4 PiB |
Support Status | SLE | SLE | SLE | SLE | SLE HA |
* Btrfs is copy-on-write file system. Rather than journaling changes before writing them in-place, it writes them to a new location, then links it in. Until the last write, the new changes are not 'committed'. Due to the nature of the filesystem, quotas are implemented based on subvolumes ('qgroups'). The blocksize default varies with different host architectures. 64KiB is used on ppc64le, 4KiB on most other systems. The actual size used can be checked with the command 'getconf PAGE_SIZE'. | |||||
** OCFS2 is fully supported as part of the SUSE Linux Enterprise High Availability Extension. | |||||
*** Reiserfs is supported for existing filesystems, the creation of new reiserfs file systems is discouraged. |
The maximum file size above can be larger than the file system's actual size due to usage of sparse blocks. Note that unless a file system comes with large file support (LFS), the maximum file size on a 32-bit system is 2 GB (2^31 bytes). Currently all of our standard file systems (including ext3 and ReiserFS) have LFS, which gives a maximum file size of 2^63 bytes in theory. The numbers in the above tables assume that the file systems are using 4 KiB block size. When using different block sizes, the results are different, but 4 KiB reflects the most common standard.
In this document: 1024 Bytes = 1 KiB; 1024 KiB = 1 MiB; 1024 MiB = 1 GiB; 1024 GiB = 1 TiB; 1024 TiB = 1 PiB; 1024 PiB = 1 EiB. See also http://physics.nist.gov/cuu/Units/binary.html.
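Each step in this convention is a factor of 1024 rather than 1000; for example:

```shell
# 1 TiB in bytes: four factor-1024 steps (KiB, MiB, GiB, TiB).
echo $((1024 * 1024 * 1024 * 1024))
```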
NFSv4 with IPv6 is only supported on the client side. An NFSv4 server with IPv6 is not supported.
This version of Samba delivers integration with Windows 7 Active Directory Domains. In addition we provide the clustered version of Samba as part of SUSE Linux Enterprise High Availability 11 SP3.
For general information about the file system layout, see the Administration Guide, Chapter Snapper.
Additional Information
/run/media/<user_name> is now used as the top directory for removable media mount points. It replaces /media, which is no longer available.
The directory /run, also symlinked as /var/run, is mounted as tmpfs and is thus not persistent across reboots. Anything stored in this directory is removed when the machine shuts down.
SUSE makes no representations or warranties with respect to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to revise this publication and to make changes to its content, at any time, without the obligation to notify any person or entity of such revisions or changes.
Further, SUSE makes no representations or warranties with respect to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to make changes to any and all parts of SUSE software, at any time, without any obligation to notify any person or entity of such changes.
Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classifications to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical/biological weaponry end uses. Please refer to http://www.suse.com/company/legal/ for more information on exporting SUSE software. SUSE assumes no responsibility for your failure to obtain any necessary export approvals.
Copyright © 2010, 2011, 2012, 2013, 2014, 2015 SUSE LLC. This release notes document is licensed under a Creative Commons Attribution-NoDerivs 3.0 United States License (CC-BY-ND-3.0 US, http://creativecommons.org/licenses/by-nd/3.0/us/).
SUSE has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at http://www.suse.com/company/legal/ and one or more additional patents or pending patent applications in the U.S. and other countries.
For SUSE trademarks, see Trademark and Service Mark list (http://www.suse.com/company/legal/). All third-party trademarks are the property of their respective owners.
Thanks for using SUSE Linux Enterprise Server in your business.
The SUSE Linux Enterprise Server Team.