Added RGB to my previous post on M27. Big crop to take out the rotation between images.
12 x 5-minute subs, plus 30% luminance blended in Photoshop. Seems OK but a bit grainy.
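One simple way to picture the luminance step is a straight 30%-opacity blend of the mono L layer over the RGB image. This is only a sketch of that idea (the function name is mine, and Photoshop's true "Luminosity" blend mode is more involved than a per-channel mix):

```python
import numpy as np

def blend_luminance(rgb, lum, opacity=0.3):
    """Blend a mono luminance layer over an RGB image at the given opacity.

    Hypothetical sketch of a 30% luminance blend -- a plain weighted
    average per channel, not Photoshop's exact 'Luminosity' mode.
    rgb:  H x W x 3 array, lum: H x W array, both scaled 0-1.
    """
    # Broadcast the 2-D luminance array across the three colour channels.
    return rgb * (1.0 - opacity) + lum[..., None] * opacity
```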
I was trying to find some decent images of the Mercury transit from the UK. There weren’t that many!
This one looked good:
And then there is the good old faithful Pete Lawrence! (Not sure where from, though)
But this one takes the prize!
Managed to catch the first few minutes of the transit before having to go out.
The first image is poor as it was culled from only a few frames of the raw avi. At least it's early, between 1st and 2nd contact.
Similarly the prominences were derived from only a few frames.
The second and third are better, as there was a few seconds' gap in the clouds for each image sequence.
I then had to leave – – – !
Worth the effort? Yes, bearing in mind I will be 85 in 2032 – – -! Sporting chance I won’t be here then!
I started off with my ASI120MC and began packing up after getting plenty of video, but changed my mind after looking at the satellite images of cloud.
It did indeed clear up and I got plenty of DSLR images of the transit after less optimistic souls fled in the face of the rain and cloud! Those who stayed on saw Mercury on the preview screen and we also got stunning views through Andy’s Daystar Quark.
I couldn’t get anything usable out of Registax or Autostakkert so I manually stacked the ten highest scoring images (and added a bit of colour, to make the image easier on the eye):
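Manually stacking the best frames amounts to averaging them pixel by pixel, which cuts noise by roughly the square root of the frame count (ten frames, about 3x less noise than one). A minimal sketch, assuming the frames are already aligned and loaded as equal-sized NumPy arrays:

```python
import numpy as np

def mean_stack(frames):
    """Average a list of aligned, equal-sized frames pixel by pixel.

    Random noise falls roughly as 1/sqrt(N), so the stacked result is
    cleaner than any single frame while the signal stays the same.
    """
    return np.mean(np.stack(frames, axis=0), axis=0)
```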
Just thought I’d put this in before I lose M27 behind the wall!
This is a big crop of the image because I am only using two sets of data, Ha and L. The L was taken in September and the Ha last week. Between the two sets,
Lee has realigned the two scopes, so there is a 45-degree rotation between them. I hope to get out tonight to get RGB and maybe some new L.
There is 1hr 25mins of L and 45mins of Ha. To get the colour I’ve used the Ha as red and the L as both G and B. Seems to have worked reasonably well.
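The Ha-as-red, L-as-green-and-blue trick is just a channel assignment. A sketch in NumPy terms, assuming both masters are aligned mono arrays scaled 0-1:

```python
import numpy as np

def ha_l_to_rgb(ha, lum):
    """Build an RGB cube with Ha as the red channel and luminance
    reused as both green and blue -- a quick bi-colour palette when
    no true RGB data has been shot yet."""
    return np.stack([ha, lum, lum], axis=-1)
```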
I’d read a few bits and pieces online about Starnet++ – a software module that uses a neural network to identify and remove stars from astronomical images, so the nebulosity can be enhanced and processed separately from the stars.
The software is free and you can find it at the link below. It was originally published as a Pixinsight module, but can now be downloaded and run standalone on Windows: https://sourceforge.net/projects/starnet/
I’ve found it’s pretty good – I’ve been playing with it today on an image of the Western Veil I took a few weeks back. This is my original processing of the image:
To use the technique I started again with the stacked file and used Pixinsight to remove the background light pollution gradients, calibrate the colours and do an initial stretch. I then put the image through the Starnet routine and it returned me the image below:
I then used the Clone Stamp tool to clean it all up (possibly more time needed on this!) and tweaked the curves to give it some contrast and got this:
I really like this, but I felt it would be better with some stars blended back in, so I went back to the image I submitted and processed purely to get the brightest stars at a prominence that I liked. I then blended the two images using Pixelmath (in the way I used it here, it’s identical to blending layers in Photoshop or GIMP with lighten):
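The "lighten" blend that PixelMath reproduces here is simply a per-pixel maximum of the two images, which is why it re-adds the stars without brightening the nebulosity underneath them. A sketch, assuming the starless and stars-only images are aligned arrays on the same scale:

```python
import numpy as np

def lighten_blend(starless, stars_only):
    """'Lighten' blend: keep the brighter of the two images at each pixel.

    Equivalent to the PixelMath expression max(starless, stars_only),
    or a 'lighten' layer in Photoshop/GIMP.
    """
    return np.maximum(starless, stars_only)
```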