Ubuntu Desktop does not respect VPN DNS servers https://askubuntu.com/questions/1566068/ubuntu-desktop-does-not-respect-vpn-dns-servers

On Ubuntu Desktop, in Settings > Network > VPN, I've configured my WireGuard VPN tunnel, including the desired IPv4 and IPv6 DNS servers.

The tunnel works, but the operating system continues to use the DNS server provided by the DHCP server of the main network interface, not the one configured for wg0. I can observe this in WireGuard.

Because of this bug, I am unable to reach internally hosted services using their DNS names.

How do I force Ubuntu to respect the VPN-mandated DNS server? If I manually query my desired DNS server that lives on the other side of the VPN, it responds correctly, so it's not a connectivity issue; it's a typical DNS leak.

I can replicate this behaviour on Ubuntu Desktop 25.10 and 26.04.
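
For reference, name resolution on Ubuntu Desktop goes through systemd-resolved, so the per-link DNS assignment can be inspected and, as a stopgap, pinned by hand. A sketch, assuming systemd-resolved is in use; 10.0.0.1 is a placeholder for the internal DNS server:

```shell
# Show which DNS server each link is actually using, then pin the VPN
# resolver to wg0. The setting commands are commented out because they
# need a live wg0 link; 10.0.0.1 is a placeholder address.
if command -v resolvectl >/dev/null; then
    resolvectl status || true       # per-link DNS servers currently in use
    # resolvectl dns wg0 10.0.0.1   # use the VPN's DNS server on wg0
    # resolvectl domain wg0 '~.'    # route all lookups through wg0
else
    echo "resolvectl not available on this system"
fi
```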

How to disable automatic display dimming in Budgie Ubuntu 26.04? https://askubuntu.com/questions/1566065/how-to-disable-automatic-display-dimming-in-budgie-ubuntu-26-04

I would like to have a GUI way of controlling automatic display dimming.

gsettings set org.gnome.settings-daemon.plugins.power idle-dim false 

seems to work from the command line.

Konsole scrollbar color (again) https://askubuntu.com/questions/1566060/konsole-scrollbar-color-again

I just installed Kubuntu 26.04 and, upon switching the Konsole color scheme to "black on light yellow", I found that the scrollbar becomes a garish bright yellow. This rings a bell; I think it happened already around 20.04, but it was fixed, and it certainly doesn't look like that in 24.04.

Good (24.04):

sample with grey scrollbar

Bad (26.04):

sample with yellow scrollbar

Any ideas how to fix it?

GNOME / Ubuntu 26.04 & 24.04 : Screen Capture Buffer Failure (Tiling & Blanking) on Intel 13th Gen (i915) https://askubuntu.com/questions/1566058/gnome-ubuntu-26-04-24-04-screen-capture-buffer-failure-tiling-blanking

I am seeking help for a graphical issue that has persisted across multiple Ubuntu versions (from 24.04 through to the current 26.04 development branch) on my ASUS ExpertBook P1503CVA.

The Problem: Regardless of the OS version or kernel, I cannot get a clean screenshot.

On older versions, screenshots showed heavy diagonal tiling/tearing artifacts.

On the current build (Ubuntu 26.04, GNOME 50, Kernel 7.0), screenshots are consistently completely white/blank.

Hardware Specs:

CPU: Intel 13th Gen (Raptor Lake-P)

GPU: Intel Iris Xe (using i915 driver)

RAM: 64GB

Display: Wayland (mandatory in GNOME 50)

I can't share my screen using Zoom https://askubuntu.com/questions/1566056/i-cant-use-share-my-screen-using-zoom

I'm using Ubuntu 25.10. I tried to fix it with several old solutions, without success.

These are the steps I tried: Screen Share Not working in Ubuntu 22.04 (In all platforms zoom, teams, google meet, anydesk , etc.,)
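
One general sanity check, offered as an assumption rather than a confirmed fix: on Wayland sessions, screen sharing in every app goes through xdg-desktop-portal, so if the portal services are not running, the share button fails everywhere at once:

```shell
# Check whether the desktop portal services are running in the user session.
# Guarded so it degrades gracefully where systemd user sessions don't exist.
if command -v systemctl >/dev/null; then
    systemctl --user status xdg-desktop-portal --no-pager || true
    systemctl --user status xdg-desktop-portal-gnome --no-pager || true
else
    echo "systemctl not available"
fi
```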

Not able to login to spotify(deb) in ubuntu 26.04 LTS https://askubuntu.com/questions/1566054/not-able-to-login-to-spotifydeb-in-ubuntu-26-04-lts

I am not able to log in to Spotify (installed from the deb package) in Ubuntu 26.04 LTS.

The proxy is off, by the way.

Permission denied (public key) error i forgot to save the key when i created the Instance https://askubuntu.com/questions/1566052/permission-denied-public-key-error-i-forgot-to-save-the-key-when-i-created-the

Subject: Locked out of Oracle Cloud ARM Instance - Permission denied (publickey) - Help needed with Key Injection

The Problem: I am locked out of my Oracle Cloud pay-as-you-go ARM instance (Ubuntu, 24 GB RAM). I do not have the original SSH private key because I forgot to save it when I created the instance, and I am receiving the Permission denied (publickey) error when trying to connect via Cloud Shell or a local terminal.

What I have tried:

  1. Cloud Shell: Generated a new RSA key pair in the Oracle Cloud Shell.

  2. Console Connection: Created a "Local Console Connection" and uploaded the new public key. However, when I launch the connection, I am prompted for a username/password. Since I never set a password for the ubuntu user, I cannot log in to manually add the key to ~/.ssh/authorized_keys.

  3. Edit Instance: I have tried to locate the "Cloud-init" script box under Compute -> Instances -> Instance Details -> Edit -> Advanced Options, but the "Management" tab does not seem to show the script injection box for this existing instance.

My Goal: I want to inject a new public key into the ubuntu user without terminating the instance, as I have important data and configurations on the boot volume.

Question: Is there a proven "easy way" to force a new SSH public key into an existing Oracle ARM instance when you are already locked out? Are there specific steps to trigger cloud-init to run again on a reboot for an existing instance, or is there a trick to the Serial Console that bypasses the password prompt for the ubuntu user?
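
For reference, once any shell access is regained (serial console, recovery boot, or attaching the boot volume to another instance), injecting the key is just a matter of appending it with the right ownership and modes. A sketch with illustrative placeholder paths; on the real instance the target would be /home/ubuntu/.ssh:

```shell
# Append a new public key to a user's authorized_keys with correct permissions.
# USER_HOME and NEW_KEY are placeholders for demonstration purposes.
USER_HOME="$(mktemp -d)"                          # stands in for /home/ubuntu
NEW_KEY='ssh-ed25519 AAAAC3...example generated-in-cloud-shell'
mkdir -p "$USER_HOME/.ssh"
printf '%s\n' "$NEW_KEY" >> "$USER_HOME/.ssh/authorized_keys"
chmod 700 "$USER_HOME/.ssh"
chmod 600 "$USER_HOME/.ssh/authorized_keys"
# On the real instance, also: chown -R ubuntu:ubuntu /home/ubuntu/.ssh
```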

System crash on Ubuntu 26.04 with TPM2 encryption, after installing the Nvidia 595 proprietary drivers https://askubuntu.com/questions/1566050/system-crash-on-ubuntu-26-04-with-tpm2-encryption-after-installing-the-nvidia-5

My system, which is configured with TPM2 encryption, crashes on reboot after installing the Nvidia 595 proprietary drivers. After that, the system is completely unusable and needs to be reinstalled.

Tutorial followed for this:

https://ubuntuhandbook.org/index.php/2026/04/nvidia-595-driver-ubuntu-26-04/

Note : I'm disappointed that the graphical installer for Nvidia's proprietary drivers has been removed from Ubuntu 26.04 ;-(

System crash on Ubuntu 26.04 with TPM2 encryption, followed by a package update to AMD64V3 https://askubuntu.com/questions/1566049/system-crash-on-ubuntu-26-04-with-tpm2-encryption-followed-by-a-package-update

After installing Ubuntu 26.04 with disk encryption via TPM2 on an internal SSD, if I update the packages to AMD64V3 versions, the system becomes corrupted and unrecoverable on the next boot ;-(

Tutorial followed for this:

https://discourse.ubuntu.com/t/introducing-architecture-variants-amd64v3-now-available-in-ubuntu-25-10/71312

Unable to install Ubuntu 26.04 with disk encryption by TPM2 on an USB 3 external SSD https://askubuntu.com/questions/1566048/unable-to-install-ubuntu-26-04-with-disk-encryption-by-tpm2-on-an-usb-3-external

While it worked with the beta version of Ubuntu 26.04, since the release of the final version I can no longer install Ubuntu 26.04 on an external USB 3 SSD with disk encryption via TPM2.

This makes testing your distribution extremely inconvenient. I don't want to have to delete one of the operating systems from my internal hard drives just to test Ubuntu... without being sure that I'll like Ubuntu 26.04 and that it won't have any bugs on my laptop.

Regards

After Update from 24.04 to 26.04: how can I configure my terminal to open in the same directory as other open terminals https://askubuntu.com/questions/1566040/after-update-from-24-04-to-26-04-how-can-i-configure-my-terminal-to-open-in-the

After the latest operating system update I realized that my productivity went down, as I was unable to configure my shell to open a new tab in the same directory as the already opened tab.

How to reproduce:

  • Open a terminal (Ctrl Alt T)
  • cd Downloads
  • Open a new terminal (Ctrl Shift T)
  • The new tab opens in ~, but in 24.04 the console opened in ~/Downloads

Any idea how I can get the old behaviour after my update?

The graphical UI has a setting "Preserve Working Directory" with the available options Always, Never, and Safe, but it does not change the behaviour of the terminal. New tabs always open in the home directory.

My prompt command variable was used in 24.04 and remains unchanged:

 $ echo $PROMPT_COMMAND
setLastCommandState; echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD/$HOME/~}\007"; setGitPrompt
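
A hedged guess based on the overridden PROMPT_COMMAND above: GNOME-based terminals learn the shell's current directory from an OSC 7 escape sequence, normally emitted by a PROMPT_COMMAND hook that a custom value can clobber. A minimal sketch of emitting the sequence yourself and chaining it in front of the existing functions:

```shell
# Emit the OSC 7 sequence that tells the terminal the shell's current
# directory, then chain it in front of the existing PROMPT_COMMAND.
__osc7_cwd() {
    printf '\033]7;file://%s%s\033\\' "${HOSTNAME:-localhost}" "$PWD"
}
PROMPT_COMMAND="__osc7_cwd; ${PROMPT_COMMAND:-}"
```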

Thanks

Can't login to ubuntu unity 26.04 live iso https://askubuntu.com/questions/1566039/cant-login-to-ubuntu-unity-26-04-live-iso

I am attempting to try out the new 26.04 version of ubuntu-unity. I downloaded the iso ubuntu-unity-26.04-desktop-amd64.iso, put it on a Ventoy stick, and am able to boot OK, to the point where it brings up a login prompt with a dropdown at the top that only shows 'Other' plus username and password fields. I thought that user ubuntu with an empty password should work, but it says it is invalid. I have tried unity, ubuntu-unity and other combinations, but nothing seems to work. What am I doing wrong, please?

How to get wifi work in Ubuntu 26.04 persistent live session? https://askubuntu.com/questions/1566036/how-to-get-wifi-work-in-ubuntu-26-04-persistent-live-session

Wifi does not work in the live session, but I have wired networking available. If I want to enable a wifi connection, is it a good idea to install a proprietary driver for the hardware? And is it a problem if I boot the USB Ubuntu on another computer that needs a different driver, once the additional driver is installed?

How can I install the driver?

I tried running sudo apt update with the following output:

[screenshot of the apt update output]

And I tried to install software-properties-gtk. That one is needed in Ubuntu to install additional drivers, right?

[screenshot of the installation attempt]

It is an 8 GB flash drive and the laptop has 8 GB RAM.
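
On the driver question: Ubuntu's usual path for proprietary drivers is the ubuntu-drivers tool, which I believe is the same backend the software-properties-gtk "Additional Drivers" tab uses. A sketch, guarded in case the tool is absent:

```shell
# List candidate proprietary drivers for the detected hardware, then
# install the recommended ones. The install line is commented out since
# it changes the system.
if command -v ubuntu-drivers >/dev/null; then
    ubuntu-drivers list || true
    # sudo ubuntu-drivers autoinstall
else
    echo "ubuntu-drivers not available"
fi
```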

trying to configure samba to work with external usb drive https://askubuntu.com/questions/1565997/trying-to-configure-samba-to-work-with-external-usb-drive

I have a computer (Kubuntu 24.04) on my local network with 3 folders shared through samba. Two of them are on an internal hard drive and the other computers on the network are able to access them with no issues.

However, the third one is on an external USB hard drive, and I haven't been able to get the other computers on the network to mount it. In /etc/fstab I have:

UUID=A8C0-A4CD    /media/user1/Rocstor    exfat rw,defaults,uid=1000,gid=988,fmask=0002,dmask=0002  0 0

to mount it when the computer starts.

In /etc/samba/smb.conf, I have:

[rocstor_folder]
comment = rocstor_ls_x
path = /media/user1/Rocstor/rocstor_shared
valid users = sambauser, @sambashare
writeable = yes
browsable = yes
read only = no
force group = sambashare    

and the folder permissions:

$ stat /media/user1/Rocstor/rocstor_shared
  File: /media/user1/Rocstor/rocstor_shared
  Size: 131072          Blocks: 256        IO Block: 131072 directory
Device: 8,66    Inode: 2           Links: 6
Access: (0775/drwxrwxr-x)  Uid: ( 1000/    user1)   Gid: (  988/sambashare)

smbclient (ls1 and ls2 are the ones that work):

$ smbclient -L 10.0.0.9
Password for [WORKGROUP\user1]:

    Sharename       Type      Comment
    ---------       ----      -------
    print$          Disk      Printer Drivers
    ls1             Disk      
    ls2             Disk      
    ls_www          Disk      
    rocstor_folder  Disk      rocstor_ls_x
    IPC$            IPC       IPC Service (linuxserver server (Samba, Ubuntu))
SMB1 disabled -- no workgroup available

/etc/fstab from one of the other computers on the network to try to connect to the share:

//10.0.0.9/rocstor_shared   /mnt/rocstor_shared  cifs  rw,credentials=/home/user2/.secrets/smb_cred   0   0

When I try to mount this from the other computers it says the share isn't found. I haven't been able to figure out why they can't see it.
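
One detail worth double-checking, stated as an observation about the configs quoted above rather than a confirmed diagnosis: a CIFS client requests the share name defined in smb.conf ([rocstor_folder] here), not the on-disk folder name. With that share definition, the client-side fstab line would look like:

```
//10.0.0.9/rocstor_folder   /mnt/rocstor_shared  cifs  rw,credentials=/home/user2/.secrets/smb_cred   0   0
```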

PCR_UNUSABLE When installing Ubuntu 26.04 with Hardware-backed encryption https://askubuntu.com/questions/1565961/pcr-unusable-when-installing-ubuntu-26-04-with-hardware-backed-encryption

I cannot install the new Ubuntu with Hardware-backed encryption because of this error:

PCR_UNUSABLE
error with secure boot policy (PCR7) measurements: unexpected EV_EFI_BOOT_SERVICES_APPLICATION event for \EFI\BOOT\BOOTEX.EXE after already seeing a verification during the OS-present environment. This event should be for the initial boot loader.

I have reset the TPM and reset the secure boot keys, I have cleared the drive so it is completely empty.

I am using a Lenovo ThinkPad T14 Gen 1 (Intel Edition), I have disabled the Intel features like AMT.

Is there anything I can turn off to fix this issue? Anything will help.

Ubuntu 26.04 installation stuck during rsync https://askubuntu.com/questions/1565956/ubuntu-26-04-installation-stuck-during-rsync

I am starting an installation of Ubuntu 26.04 with a full disk wipe, using the graphical installer. The installation has been stuck at an rsync command for 30+ minutes after an error:

rsync connection closed

Any way to proceed?

Has anyone had any success with TPM encryption? https://askubuntu.com/questions/1562222/has-anyone-had-any-success-with-tpm-encryption

I got a new Framework laptop and expected the TPM to be able to trigger the full disk encryption, however on installing Ubuntu 25.10 I could not select the TPM option. Does anyone know of a good guide to this? All my searches have come up short so far.

NVIDIA GeForce RTX 5060 Ti drivers https://askubuntu.com/questions/1562085/nvidia-geforce-rtx-5060-ti-drivers

I have NVIDIA GeForce RTX 5060 Ti + Ryzen 5 5600X, and I have the following problems installing Kubuntu on that computer:

  • I tried installing Kubuntu 24.10 and 25.10 and I was not even able to run it from the live CD (flash drive).

  • I even tried installing Ubuntu Server. I had the same problem, just a blank screen. The fans were spinning at maximum for 25.10.

  • I consulted an AI and it helped me run it without the GPU drivers. I was able to run it and install it, but during the Ubuntu Server 25.10 installation I ticked the option to install the third-party Nvidia 580 driver automatically, and now I am not able to run the system again; I get just a blank screen.

  • AI helped me again to run it without the GPU using the nomodeset option. It sort of runs, but fills the screen with hundreds of "probe with nvidia failed" errors, and of course the system cannot be used because I cannot get into a command line at all.

Is there help for me or should I just not use Ubuntu?

How to disable HDMI output sinks in Pipewire? https://askubuntu.com/questions/1557133/how-to-disable-hdmi-output-sinks-in-pipewire

I want to do that for two reasons:

Firstly, in the workplace setting I never need, or indeed tolerate, HDMI output broadcasting my sounds at large. Secondly, there seems to be a bug in Ubuntu where, when I plug in headphones, the system correctly re-routes sound from HDMI to the laptop, but towards the speaker, not the headphones, still broadcasting at large; I hope disabling HDMI will get rid of that. Because of the integrated chipset, I need to disable specific sinks instead.

It looks like Pipewire not only has new bugs Pulseaudio didn't, but it also has no GUI or even command-line tuning (wireplumber exists, but its command line is limited and there is no GUI); instead the user is expected to write JSON files in their dot directories to configure it. I would prefer a console or GUI tool, but a JSON-writing recipe is not off limits.

I've reviewed existing popular answers:

  • Disabling port switching looks wrong, because I still want switching between headphones and speakers, just never HDMI;
  • Blacklisting snd_hda_codec_hdmi does not work: upon reboot the module is still loaded and the HDMI sinks are present; even if it worked, I believe it would disable the entirety of the Meteor Lake controller, including speaker/headphone output.
  • I am using the Meteor Lake super-integrated chipset, meaning there are not two separate audio controllers but a single one, so there is no option to disable a separate HDMI controller entirely: [screenshot of pavucontrol controllers]
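
A sketch of the JSON-writing route, with the caveat that this assumes WirePlumber 0.5's SPA-JSON configuration layout; the file name and node-name pattern are illustrative, and the actual node name to match should be taken from wpctl status:

```
# ~/.config/wireplumber/wireplumber.conf.d/51-disable-hdmi.conf
monitor.alsa.rules = [
  {
    matches = [
      { node.name = "~alsa_output.*hdmi.*" }
    ]
    actions = {
      update-props = {
        node.disabled = true
      }
    }
  }
]
```
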
Ubuntu 24.04 (Noble Numbat) hangs on installation https://askubuntu.com/questions/1511927/ubuntu-24-04-noble-numbat-hangs-on-installation

I've been trying to install the new Ubuntu 24.04 (Noble Numbat) on a PC that's been running 22.04 for a while. Observations:

  1. Upon boot from the USB stick it errors out quickly (IIRC "Oh no something went wrong!" but I don't remember exactly).

  2. It seems the new installer is unhappy with something about my disk partitioning (tri-boot: Windows, Debian, and Ubuntu, with the latter two sharing LUKS and LVM volumes). If I unlock the LUKS partitions and restart the installer, I get the first screen to select my language, click "Next", and it just spins.

  3. After a bunch of trial-and-error I discovered if I boot a fresh installer and quickly kill it (presumably before it caches some data that contributes to the error), then unlock the LUKS partitions, the installer makes it further but still ultimately hangs. By poking around I found subiquity-server-info.log, which contains:

2024-04-27 00:59:23,292 INFO subiquity:201 Starting Subiquity server revision 171 of snap /snap/ubuntu-desktop-bootstrap/171 of version 0+git.2d119e1b3
2024-04-27 00:59:23,292 INFO subiquity:205 Arguments passed: ['/snap/ubuntu-desktop-bootstrap/171/bin/subiquity/subiquity/cmd/server.py', '--use-os-prober', '--storage-version=2', '--postinst-hooks-dir=/snap/ubuntu-desktop-bootstrap/171/etc/subiquity/postinst.d']
2024-04-27 00:59:25,292 INFO root:38 start: subiquity/apply_autoinstall_config: 
2024-04-27 00:59:25,295 INFO root:38 finish: subiquity/apply_autoinstall_config: SUCCESS: 
2024-04-27 00:59:25,559 ERROR root:38 finish: subiquity/Refresh/check_for_update: FAIL: cancelled
2024-04-27 00:59:25,598 INFO root:38 start: subiquity/Meta/status_GET: 
2024-04-27 00:59:25,599 INFO root:38 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "WAITING", "confirming_tty": "", "error": null, "nonreportable_erro...
2024-04-27 00:59:25,602 INFO root:38 start: subiquity/Meta/client_variant_POST: 
2024-04-27 00:59:25,602 INFO root:38 finish: subiquity/Meta/client_variant_POST: SUCCESS: 200 null
2024-04-27 00:59:25,603 INFO root:38 start: subiquity/Meta/status_GET: 
2024-04-27 00:59:25,603 INFO root:38 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "WAITING", "confirming_tty": "", "error": null, "nonreportable_erro...
2024-04-27 00:59:25,604 INFO root:38 start: subiquity/Meta/mark_configured_POST: 
2024-04-27 00:59:25,605 INFO root:38 finish: subiquity/Meta/mark_configured_POST: SUCCESS: 200 null
2024-04-27 00:59:25,636 ERROR probert.multipath:38 Failed to run cmd: ['multipathd', 'show', 'maps', 'raw', 'format', '%w,%d,%N']
2024-04-27 00:59:25,636 ERROR probert.multipath:38 Failed to run cmd: ['multipathd', 'show', 'paths', 'raw', 'format', '%d,%z,%m,%N,%n,%R,%r,%a']
2024-04-27 00:59:26,190 INFO probert.lvm:120 b'  9 logical volume(s) in volume group "ubuntu-vg" now active\n'
2024-04-27 00:59:26,266 ERROR probert.lvm:225 Found duplicate volume group: ubuntu-vg
2024-04-27 00:59:26,266 ERROR probert.lvm:225 Found duplicate volume group: ubuntu-vg
2024-04-27 00:59:26,266 ERROR probert.lvm:225 Found duplicate volume group: ubuntu-vg
2024-04-27 00:59:26,267 ERROR probert.lvm:225 Found duplicate volume group: ubuntu-vg
2024-04-27 00:59:26,267 ERROR probert.lvm:225 Found duplicate volume group: ubuntu-vg
2024-04-27 00:59:26,267 ERROR probert.lvm:225 Found duplicate volume group: ubuntu-vg
2024-04-27 00:59:26,268 ERROR probert.lvm:225 Found duplicate volume group: ubuntu-vg
2024-04-27 00:59:26,268 ERROR probert.lvm:225 Found duplicate volume group: ubuntu-vg
2024-04-27 00:59:26,364 INFO root:38 start: subiquity/Meta/status_GET: 
2024-04-27 00:59:26,364 INFO root:38 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "WAITING", "confirming_tty": "", "error": null, "nonreportable_erro...
2024-04-27 00:59:26,365 INFO root:38 start: subiquity/Meta/status_GET: 
2024-04-27 00:59:26,365 INFO root:38 finish: subiquity/Meta/status_GET: SUCCESS: 200 {"state": "WAITING", "confirming_tty": "", "error": null, "nonreportable_erro...
2024-04-27 00:59:26,366 INFO root:38 start: subiquity/Meta/status_GET: 
2024-04-27 00:59:26,366 INFO root:38 start: subiquity/Meta/interactive_sections_GET: 
2024-04-27 00:59:26,366 INFO root:38 finish: subiquity/Meta/interactive_sections_GET: SUCCESS: 200 null

Any suggestions about how to debug this further and/or work around it? I was going to install over an existing volume anyway (e.g., wipe my Debian volume), so if I could, for example, run the installer from my 22.04 volume, that would at least speed up my debugging cycles and make it easier to get logs and screenshots from the installer into bug reports.

Anydesk error: Aborted (core dumped) in Ubuntu 22.04 https://askubuntu.com/questions/1407748/anydesk-error-aborted-core-dumped-in-ubuntu-22-04

I have recently installed Ubuntu 22.04 on my computer.

Initially I installed libgtkglext1 using the following command:

sudo apt-get install libgtkglext1

After that I installed libpangox-1.0-0 using the following commands:

wget http://ftp.us.debian.org/debian/pool/main/p/pangox-compat/libpangox-1.0-0_0.0.2-5.1_amd64.deb

sudo apt install ./libpangox-1.0-0_0.0.2-5.1_amd64.deb

Now when I run ./anydesk on my terminal I get the following error:

Aborted (core dumped)

How do I fix it? Please help.
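
As a general first step for an "Aborted (core dumped)" at startup, it may help to check the binary for unresolved shared libraries; a sketch, guarded since anydesk may not be on the PATH:

```shell
# Report any shared libraries the anydesk binary cannot resolve.
if command -v anydesk >/dev/null; then
    ldd "$(command -v anydesk)" | grep 'not found' || echo "no missing libraries"
else
    echo "anydesk binary not found"
fi
```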

Failed to start Apply Kernel Variables - Cannot connect to ethernet with sudo dhclient enp2s0 https://askubuntu.com/questions/1269251/failed-to-start-apply-kernel-variables-cannot-connect-to-ethernet-with-sudo-dh

My system dual boots Ubuntu 18.04 and Windows 10. Installed both last year on a new Dell laptop. No problems since then. Wasn't trying to upgrade the distribution as in other similar questions.

I booted Ubuntu 18.04 this week and received "Failed to start Apply Kernel Variables". Then was dropped to the command prompt and logged in as root. Ran

systemctl status systemd-sysctl.service

(Three failure results are listed at the end of this post; I don't know how to attach the full file. The first two suggest a network problem, the last one a file system error.)

I started following the instructions from a post where this seemed to have been answered.

All went fine until I tried to start my ethernet (wired connection):

sudo dhclient enp2s0
cmp: EOF on /tmp/tmp.gxb88KcVpV which is empty

I can't go any further. Please guide me here - I only have a basic knowledge of Ubuntu. Thanks!

The log file extracts below were produced while I was on a wireless connection. The error from "dhclient" came up when I was on the wired connection.

1. 
Aug 19 09:39:39 Inspiron-7472 systemd-udevd[434]: Process '/lib/systemd/systemd-sysctl --prefix=/net/ipv4/conf/enp2s0 --prefix=/net/ipv4/neigh/enp2s0 --prefix=/net/ipv6/conf/enp2s0 --prefix=/net/ipv6/neigh/enp2s0' failed with exit code 1.
Aug 19 09:39:39 Inspiron-7472 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Aug 19 09:39:39 Inspiron-7472 kernel: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'

2.
Aug 19 09:39:40 Inspiron-7472 systemd-udevd[438]: Process '/lib/systemd/systemd-sysctl --prefix=/net/ipv4/conf/wlp3s0 --prefix=/net/ipv4/neigh/wlp3s0 --prefix=/net/ipv6/conf/wlp3s0 --prefix=/net/ipv6/neigh/wlp3s0' failed with exit code 1

3.
Unit media-windata.mount has finished starting up.
-- 
-- The start-up result is RESULT.
Aug 19 09:39:44 Inspiron-7472 systemd-fsck[933]: /dev/sda3: Inode 6292001 seems to contain garbage.
Aug 19 09:39:44 Inspiron-7472 systemd-fsck[933]: /dev/sda3: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
Aug 19 09:39:44 Inspiron-7472 systemd-fsck[933]:         (i.e., without -a or -p options)
Aug 19 09:39:45 Inspiron-7472 systemd-fsck[933]: fsck failed with exit status 4.
Aug 19 09:39:45 Inspiron-7472 systemd-fsck[933]: Running request emergency.target/start/replace
Aug 19 09:39:45 Inspiron-7472 systemd[1]: systemd-fsck@dev-disk-by\x2duuid-432d3483\x2d14c5\x2d461d\x2db21c\x2d965d0bedfeef.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 09:39:45 Inspiron-7472 systemd[1]: systemd-fsck@dev-disk-by\x2duuid-432d3483\x2d14c5\x2d461d\x2db21c\x2d965d0bedfeef.service: Failed with result 'exit-code'.
Aug 19 09:39:45 Inspiron-7472 systemd[1]: Failed to start File System Check on /dev/disk/by-uuid/432d3483-14c5-461d-b21c-965d0bedfeef.
-- Subject: Unit systemd-fsck@dev-disk-by\x2duuid-432d3483\x2d14c5\x2d461d\x2db21c\x2d965d0bedfeef.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- Unit systemd-fsck@dev-disk-by\x2duuid-432d3483\x2d14c5\x2d461d\x2db21c\x2d965d0bedfeef.service has failed.
-- 
-- The result is RESULT.
Aug 19 09:39:45 Inspiron-7472 systemd[1]: Dependency failed for /home.
-- Subject: Unit home.mount has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- Unit home.mount has failed.
-- 
-- The result is RESULT.
Aug 19 09:39:45 Inspiron-7472 systemd[1]: Dependency failed for Local File Systems.
-- Subject: Unit local-fs.target has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support

Emergency mode screen and fsck doesn't work https://askubuntu.com/questions/1257102/emergency-mode-screen-and-fsck-doesnt-work

I'm stuck at the "You are in emergency mode" screen (Ubuntu 20) while trying to log in to my machine.

[    1.075092] xhci_hcd 0000:02:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain
=0x0011 address=0xce943880 flags=0x0000]
[    1.075099] xhci_hcd 0000:02:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain
=0x0011 address=0xce943880 flags=0x0000]
[    1.075105] xhci_hcd 0000:02:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain
=0x0011 address=0xce943880 flags=0x0000]
[    1.075112] AMD-Vi: Event logged [IO_PAGE_FAULT device=02:00.0 domain=0x0011
address=0xce943880 flags=0x0000]
[    1.075118] AMD-Vi: Event logged [IO_PAGE_FAULT device=02:00.0 domain=0x0011
address=0xce943880 flags=0x0000]
[    1.075124] AMD-Vi: Event logged [IO_PAGE_FAULT device=02:00.0 domain=0x0011
address=0xce943880 flags=0x0000]
[    1.075131] AMD-Vi: Event logged [IO_PAGE_FAULT device=02:00.0 domain=0x0011
address=0xce943880 flags=0x0000]
[    1.075139] AMD-Vi: Event logged [IO_PAGE_FAULT device=02:00.0 domain=0x0011
address=0xce943880 flags=0x0000]
[    1.075146] AMD-Vi: Event logged [IO_PAGE_FAULT device=02:00.0 domain=0x0011
address=0xce943880 flags=0x0000]
[    1.075153] AMD-Vi: Event logged [IO_PAGE_FAULT device=02:00.0 domain=0x0011
address=0xce943880 flags=0x0000]
[    1.075159] AMD-Vi: Event logged [IO_PAGE_FAULT device=02:00.0 domain=0x0011
address=0xce943880 flags=0x0000]
[    1.075166] AMD-Vi: Event logged [IO_PAGE_FAULT device=02:00.0 domain=0x0011
address=0xce943880 flags=0x0000]
[   11.075015] xhci_hcd 0000:02:00.0: can't setup: -110
[   11.075114] xhci_hcd 0000:02:00.0: init 0000:02:00.0 fail, -110
/dev/sdb7: clean, 273949/2501856 files, 3681596/10000128 blocks
You are in emergency mode. After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, "systemctl default" or "exit"
to boot into default mode.
Press Enter for maintenance
(or press Control-D to continue): _

I booted from a live USB and ran the fsck command, but it doesn't seem to work. First I listed my drives:

Disk /dev/loop5: 49.8 MiB, 52203520 bytes, 101960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 223.58 GiB, 240057409536 bytes, 468862128 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 69C63C93-3282-45B1-98B6-CDC4B7118752

Device          Start       End   Sectors   Size Type
/dev/sdb1        2048   1023999   1021952   499M Windows recovery environment
/dev/sdb2     1024000   1226751    202752    99M EFI System
/dev/sdb3     1226752   1259519     32768    16M Microsoft reserved
/dev/sdb4     1259520 234881023 233621504 111.4G Microsoft basic data
/dev/sdb5   234881024 236881919   2000896   977M Linux filesystem
/dev/sdb6   236881920 252882943  16001024   7.6G Linux swap
/dev/sdb7   252882944 332883967  80001024  38.2G Linux filesystem
/dev/sdb8   332883968 468860927 135976960  64.9G Linux filesystem

Disk /dev/sda: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD1003FZEX-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x499f4b81

Device     Boot      Start         End    Sectors   Size Id Type
/dev/sda1  *          2048   524290047  524288000   250G  7 HPFS/NTFS/exFAT
/dev/sda2        524290048  1153435647  629145600   300G  7 HPFS/NTFS/exFAT
/dev/sda3       1184892928  1953523711  768630784 366.5G  5 Extended
/dev/sda4       1153435648  1184892927   31457280    15G  7 HPFS/NTFS/exFAT
/dev/sda5       1468010496  1635782655  167772160    80G  7 HPFS/NTFS/exFAT
/dev/sda6       1635784704  1953523711  317739008 151.5G  7 HPFS/NTFS/exFAT
/dev/sda7       1184894976  1468010495  283115520   135G  7 HPFS/NTFS/exFAT

Partition table entries are not in disk order.

I then ran fsck. I don't know if it is supposed to behave like this, but hardly anything showed up in the console:

ubuntu@ubuntu:~$ sudo fsck -f /dev/sdb7
fsck from util-linux 2.34
e2fsck 1.45.5 (07-Jan-2020)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdb7: 273949/2501856 files (0.2% non-contiguous), 3681596/10000128 blocks
ubuntu@ubuntu:~$ █

I also don't understand why sda1 shows as a boot partition in the drive list, since it's just a storage partition. The computer actually boots from sdb.

I ran into this problem when I hard-reset my machine as it was freezing during boot-up.

Super key not working in Ubuntu 20.04 https://askubuntu.com/questions/1231863/super-key-not-working-in-ubuntu-20-04

I just updated to Ubuntu 20.04 LTS and my Super key has lost its main functionality. Previously, in Ubuntu 18.04 LTS, I was able to press the Super key and the applications drawer would show. Now it doesn't do that. I can still open it by pressing Super+A, but it's annoying that it doesn't do that.

I restarted my machine and still nothing.

Cannot fsck a disk due to no r/w access https://askubuntu.com/questions/938943/cannot-fsck-a-disk-due-to-no-r-w-access

I am trying to fsck an external drive (sdb):

caine@caine-VirtualBox:~$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb      8:16   0   5.5T  0 disk 
├─sdb2   8:18   0     2G  0 part 
├─sdb5   8:21   0   5.5T  0 part 
└─sdb1   8:17   0   2.4G  0 part 
sr0     11:0    1 1024M   0 rom  
sda      8:0    0    10G  0 disk 
├─sda2   8:2    0     1K  0 part 
├─sda5   8:5    0  1022M  0 part [SWAP]
└─sda1   8:1    0     9G  0 part /

...however it says I do not have r/w access to the drive.

caine@caine-VirtualBox:~$ fsck /dev/sdb5
fsck from util-linux 2.27.1
e2fsck 1.42.13 (17-May-2015)
fsck.ext2: Permission denied while trying to open /dev/sdb5
You must have r/w access to the filesystem or be root

What should I do?
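
The error text itself suggests the likely fix: fsck opens the block device directly, which needs root privileges. A sketch, guarded so it only runs where the device exists (and the partition must be unmounted):

```shell
# Re-run fsck with root privileges on the target partition.
if [ -b /dev/sdb5 ]; then
    sudo fsck /dev/sdb5
else
    echo "/dev/sdb5 not present on this machine"
fi
```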

Custom launcher icon opens a second generic icon https://askubuntu.com/questions/904886/custom-launcher-icon-opens-a-second-generic-icon

I wrote a script in Python to make my volume louder using pactl. I made a .desktop file:

[Desktop Entry]
Type=Application
Terminal=false
Name=Super Volume
Icon=/home/tyler/SuperVolume/icon.ico
Exec=/home/tyler/SuperVolume/SuperVolume.py

All was well:

[launcher item with icon]

But then I noticed that it was not adding the white arrow to my icon, but opening a generic icon and putting the arrow on that:

[the dreaded generic icon]

So if anyone could tell me how to change this, I would really appreciate it. I googled a lot before asking here, but maybe I wasn't using the right keywords; not sure.
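
One common cause of the duplicate generic icon, offered as an assumption rather than a confirmed diagnosis: the launcher cannot match the running window to the .desktop file because the window's WM_CLASS differs from what the launcher expects. Adding a StartupWMClass line (the value shown here is hypothetical; find the real one with xprop WM_CLASS on the running window) may fix the association:

```
[Desktop Entry]
Type=Application
Terminal=false
Name=Super Volume
Icon=/home/tyler/SuperVolume/icon.ico
Exec=/home/tyler/SuperVolume/SuperVolume.py
StartupWMClass=SuperVolume
```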

http://localhost:8080/ vs http://server_IP_address:8080/ https://askubuntu.com/questions/718336/http-localhost8080-vs-http-server-ip-address8080

After installing tomcat7 on Ubuntu:

What should I do if http://localhost:8080/ works fine but http://server_IP_address:8080/ does not? What is the difference between these two?
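
The difference is the bind address: localhost (127.0.0.1) is the loopback interface, reachable only from the same machine, while the server's IP is reachable from the network, and a service only answers on the addresses it was bound to (for Tomcat, the address attribute on the Connector in server.xml, plus any firewall in between). A minimal Python illustration of loopback-only binding:

```python
# A socket bound to 127.0.0.1 accepts connections only via loopback,
# just like a Tomcat connector bound to that address.
import socket

srv = socket.socket()
srv.bind(("127.0.0.1", 0))     # loopback only; port 0 = pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# Connecting via the loopback address succeeds:
conn = socket.create_connection(("127.0.0.1", port), timeout=1)
conn.close()
srv.close()
```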

How to view history of apt-get install? https://askubuntu.com/questions/680410/how-to-view-history-of-apt-get-install

How can I view the history of apt-get install commands that I have manually executed?

It seems to me that all the available methods show everything that has been installed, right from the start of the Ubuntu installation.

How can I view the history of apt-get install commands run since my system installation completed?
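
For reference, apt records each transaction, including the invoking command line, in /var/log/apt/history.log (older entries are rotated into history.log.1.gz and so on), so filtering that file for install command lines is one way to get at this. A sketch, run against a made-up sample in the same format so it is self-contained:

```shell
# On a real system:  grep 'Commandline: apt-get install' /var/log/apt/history.log
sample='Start-Date: 2015-09-20  10:00:00
Commandline: apt-get install vim
End-Date: 2015-09-20  10:00:05
Start-Date: 2015-09-21  09:00:00
Commandline: apt-get upgrade
End-Date: 2015-09-21  09:02:00'

# Keep only the lines that record an explicit "apt-get install"
installs=$(printf '%s\n' "$sample" | grep 'Commandline: apt-get install')
echo "$installs"
```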

Unable to execute command start-all.sh in Hadoop https://askubuntu.com/questions/438584/unable-to-execute-command-start-all-sh-in-hadoop

I'm following this tutorial: How to install Hadoop? (the one made by Luis Alvarado in one of the comments). I'm on Ubuntu 13.10 64-bit, and the Hadoop version is 2.2.0.

I'm a total newbie with Hadoop; it's new to me, and we are trying to work on a Big Data related project. I know the tutorial is based on earlier versions of Hadoop, but I managed to make it through the 11th step. The output of that step is:

root@sandesh-Inspiron-1564:/home/hduser/hadoop# sudo ./bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/03/24 20:29:54 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = sandesh-Inspiron-1564/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hduser/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hduser/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-io-2.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hduser/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hduser/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/home/hduser/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hduser/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/hduser/hadoop/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hduser/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hduser/hadoop/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hduser/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jsch-0.1.4
2.jar:/home/hduser/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/hduser/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/hduser/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-lang-2.5.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/asm-3.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-math-2.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/xz-1.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/hduser/hadoop/share/hadoop/common/lib/activation-1.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/hduser/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hduser/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hduser/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/hduser/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/hduser/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/home/hduser/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/home/hduser/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/hduser/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar:/home/hduser/hadoop/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/home/hduser/hadoop/share/hadoop/hdfs:/home/hduser/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-io
-2.1.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hduser/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/hduser/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/hduser/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hduser/hadoop/share/hadoop/yarn/lib
/junit-4.10.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/avro-1.7.4.jar:/home/hduser/hadoop/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/hduser/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar
:/home/hduser/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hduser/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/hduser/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0_51
************************************************************/
14/03/24 20:29:54 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hduser/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
14/03/24 20:29:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-f3d89333-8217-48ce-9281-44b0caed76f9
14/03/24 20:29:55 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/03/24 20:29:55 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/03/24 20:29:55 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/03/24 20:29:55 INFO util.GSet: Computing capacity for map BlocksMap
14/03/24 20:29:55 INFO util.GSet: VM type       = 64-bit
14/03/24 20:29:55 INFO util.GSet: 2.0% max memory = 889 MB
14/03/24 20:29:55 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/03/24 20:29:55 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/03/24 20:29:55 INFO blockmanagement.BlockManager: defaultReplication         = 1
14/03/24 20:29:55 INFO blockmanagement.BlockManager: maxReplication             = 512
14/03/24 20:29:55 INFO blockmanagement.BlockManager: minReplication             = 1
14/03/24 20:29:55 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/03/24 20:29:55 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/03/24 20:29:55 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/03/24 20:29:55 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/03/24 20:29:55 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
14/03/24 20:29:55 INFO namenode.FSNamesystem: supergroup          = supergroup
14/03/24 20:29:55 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/03/24 20:29:55 INFO namenode.FSNamesystem: HA Enabled: false
14/03/24 20:29:55 INFO namenode.FSNamesystem: Append Enabled: true
14/03/24 20:29:55 INFO util.GSet: Computing capacity for map INodeMap
14/03/24 20:29:55 INFO util.GSet: VM type       = 64-bit
14/03/24 20:29:55 INFO util.GSet: 1.0% max memory = 889 MB
14/03/24 20:29:55 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/03/24 20:29:55 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/03/24 20:29:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/03/24 20:29:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/03/24 20:29:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/03/24 20:29:55 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/03/24 20:29:55 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/03/24 20:29:55 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/03/24 20:29:55 INFO util.GSet: VM type       = 64-bit
14/03/24 20:29:55 INFO util.GSet: 0.029999999329447746% max memory = 889 MB
14/03/24 20:29:55 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /app/hadoop/tmp/dfs/name ? (Y or N) y
14/03/24 20:30:01 INFO common.Storage: Storage directory /app/hadoop/tmp/dfs/name has been successfully formatted.
14/03/24 20:30:01 INFO namenode.FSImage: Saving image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
14/03/24 20:30:01 INFO namenode.FSImage: Image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
14/03/24 20:30:01 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/03/24 20:30:01 INFO util.ExitUtil: Exiting with status 0
14/03/24 20:30:01 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sandesh-Inspiron-1564/127.0.1.1
************************************************************/

Now this is the error I get when executing ./sbin/start-all.sh:

hduser@sandesh-Inspiron-1564:~/hadoop$ ./sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/03/24 20:34:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hduser/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
localhost]
sed: -e expression #1, char 6: unknown option to `s'
Java: ssh: Could not resolve hostname Java: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
-c: Unknown cipher type 'cd'
The: ssh: Could not resolve hostname The: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
guard: ssh: Could not resolve hostname guard: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
disabled: ssh: Could not resolve hostname disabled: Name or service not known
You: ssh: Could not resolve hostname You: Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
highly: ssh: Could not resolve hostname highly: Name or service not known
that: ssh: Could not resolve hostname that: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
will: ssh: Could not resolve hostname will: Name or service not known
now.: ssh: Could not resolve hostname now.: Name or service not known
'-z: ssh: Could not resolve hostname '-z: Name or service not known
link: ssh: Could not resolve hostname link: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
it: ssh: Could not resolve hostname it: Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
try: ssh: Could not resolve hostname try: Name or service not known
',: ssh: Could not resolve hostname ',: Name or service not known
localhost: starting namenode, logging to /home/hduser/hadoop/logs/hadoop-hduser-namenode-sandesh-Inspiron-1564.out
to: ssh: connect to host to port 22: Connection refused
warning:: ssh: Could not resolve hostname warning:: Name or service not known
you: ssh: Could not resolve hostname you: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
It's: ssh: Could not resolve hostname It's: Name or service not known
or: ssh: Could not resolve hostname or: Name or service not known
recommended: ssh: Could not resolve hostname recommended: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
localhost: starting datanode, logging to /home/hduser/hadoop/logs/hadoop-hduser-datanode-sandesh-Inspiron-1564.out
Starting secondary namenodes [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hduser/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
0.0.0.0]
sed: -e expression #1, char 6: unknown option to `s'
-c: Unknown cipher type 'cd'
Java: ssh: Could not resolve hostname Java: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
You: ssh: Could not resolve hostname You: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is 89:fb:3d:98:2c:6d:03:c1:a3:de:96:3b:39:bc:ca:b3.
Are you sure you want to continue connecting (yes/no)? loaded: ssh: Could not resolve hostname loaded: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
guard: ssh: Could not resolve hostname guard: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
The: ssh: Could not resolve hostname The: Name or service not known
disabled: ssh: Could not resolve hostname disabled: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
try: ssh: Could not resolve hostname try: Name or service not known
you: ssh: Could not resolve hostname you: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
highly: ssh: Could not resolve hostname highly: Name or service not known
It's: ssh: Could not resolve hostname It's: Name or service not known
now.: ssh: Could not resolve hostname now.: Name or service not known
that: ssh: Could not resolve hostname that: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
link: ssh: Could not resolve hostname link: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
or: ssh: Could not resolve hostname or: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
recommended: ssh: Could not resolve hostname recommended: Name or service not known
'-z: ssh: Could not resolve hostname '-z: Name or service not known
it: ssh: Could not resolve hostname it: Name or service not known
will: ssh: Could not resolve hostname will: Name or service not known
',: ssh: Could not resolve hostname ',: Name or service not known
'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
to: ssh: connect to host to port 22: Connection refused

Output for ./start-dfs.sh:

hduser@sandesh-Inspiron-1564:~/hadoop$ ./sbin/start-dfs.sh
14/03/24 20:57:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hduser/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
localhost]
sed: -e expression #1, char 6: unknown option to `s'
-c: Unknown cipher type 'cd'
'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
now.: ssh: Could not resolve hostname now.: Name or service not known
or: ssh: Could not resolve hostname or: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
that: ssh: Could not resolve hostname that: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
recommended: ssh: Could not resolve hostname recommended: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
it: ssh: Could not resolve hostname it: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
',: ssh: Could not resolve hostname ',: Name or service not known
link: ssh: Could not resolve hostname link: Name or service not known
you: ssh: Could not resolve hostname you: Name or service not known
disabled: ssh: Could not resolve hostname disabled: Name or service not known
The: ssh: Could not resolve hostname The: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
You: ssh: Could not resolve hostname You: Name or service not known
guard: ssh: Could not resolve hostname guard: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
will: ssh: Could not resolve hostname will: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
localhost: starting namenode, logging to /home/hduser/hadoop/logs/hadoop-hduser-namenode-sandesh-Inspiron-1564.out
to: ssh: connect to host to port 22: Connection refused
loaded: ssh: Could not resolve hostname loaded: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
'-z: ssh: Could not resolve hostname '-z: Name or service not known
try: ssh: Could not resolve hostname try: Name or service not known
highly: ssh: Could not resolve hostname highly: Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
Java: ssh: Could not resolve hostname Java: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
It's: ssh: Could not resolve hostname It's: Name or service not known
noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
localhost: starting datanode, logging to /home/hduser/hadoop/logs/hadoop-hduser-datanode-sandesh-Inspiron-1564.out

And this is the output for ./start-yarn.sh:

hduser@sandesh-Inspiron-1564:~/hadoop$ ./sbin/start-yarn.sh
starting yarn daemons
resourcemanager running as process 16118. Stop it first.
localhost: nodemanager running as process 16238. Stop it first.
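
A note on where the flood of "Could not resolve hostname" errors above comes from: the JVM prints its stack-guard warning on stdout, the start scripts pick that output up as part of the list of hosts to start daemons on, and the shell word-splits it, so every word of the warning ("Java", "HotSpot(TM)", "VM", ...) is handed to ssh as a hostname. The word-splitting itself can be sketched in isolation (warning text abbreviated):

```shell
# Stand-in for the JVM warning line that leaks into the host list
warning="Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library"

# Unquoted expansion word-splits, just as the start scripts do,
# so each word would be treated as a separate ssh target
targets=$(for host in $warning; do echo "ssh target: $host"; done)
echo "$targets"
```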

How can I switch between windows of the same application? https://askubuntu.com/questions/18037/how-can-i-switch-between-windows-of-the-same-application

I often have more than ten windows open at the same time, and some of them belong to the same application, notably gnome-terminal.

Often, when I am on one terminal, I just want to get to another terminal. With Alt-Tab you have to choose from the windows of all applications, which is a pain. Even GNOME 3, which groups windows by application and previews them with Alt-`, isn't enough, because it's hard to tell terminal windows apart from their small previews; in most cases you can only tell which terminal does what when it's shown at full size.

So is there an application, window-manager feature, or GNOME shortcut that shows only the other windows of the same application when you are switching?
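
For what it's worth, GNOME exposes exactly this as the switch-group keybinding (the Alt-` behaviour mentioned above); the commands below are a configuration sketch using the org.gnome.desktop.wm.keybindings schema, and the Super+` value is only an example binding, not something this question prescribes:

```shell
# Show the current binding for cycling windows of the same application
gsettings get org.gnome.desktop.wm.keybindings switch-group

# Rebind it, e.g. to Super+` (example value)
gsettings set org.gnome.desktop.wm.keybindings switch-group "['<Super>grave']"
```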