PostPosted: Sun May 28, 2017 7:43 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Arlet wrote:
In case of something like a house fire, I can quickly yank off the external drive and stick it in my pocket.

Only if you are home when the fire starts. :cry:

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sun May 28, 2017 7:45 am 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
BigDumbDinosaur wrote:
Arlet wrote:
In case of something like a house fire, I can quickly yank off the external drive and stick it in my pocket.

Only if you are home when the fire starts. :cry:

Sure, but at least it gives you a chance to grab the most recent backup. Off-site is superior, of course, but daily rotation can be a bit burdensome.


PostPosted: Sun May 28, 2017 7:58 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Arlet wrote:
BigDumbDinosaur wrote:
Arlet wrote:
In case of something like a house fire, I can quickly yank off the external drive and stick it in my pocket.

Only if you are home when the fire starts. :cry:

Sure, but at least it gives you a chance to grab the most recent backup. Off-site is superior, of course, but daily rotation can be a bit burdensome.

Because my servers have business data, as well as personal stuff like my POC projects, I'm meticulous about daily media rotation. I keep the tapes in two UL-listed media safes, which I know from past experience can easily survive a fire, a flood, or, in the case of one client, having the building reduced to sticks and bricks by a tornado. The safes are located on the lowest level in the building...just in case. They are also quite heavy. I don't know if one could be picked up by a tornado, but I can assure you that if a tornado is headed our way, I'm not going to hang around to find out. :D

There is no one backup strategy that works for everyone. For example, while USB drives have high capacity and are much more trustworthy than they were in the past, they aren't fast enough for high-volume daily backups, such as what many businesses would be doing with their servers. That's why all the servers we ship come standard with LTO-5 tape drives. The tape itself is probably no faster than the physical medium used in a USB drive. However, the SAS interface that attaches the LTO drive to the system operates at a sustained speed of 600 MB/second, which is far faster than even USB 3.0. Nevertheless, for home or light business use, USB drives have been a big improvement over previous technology.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sun May 28, 2017 6:29 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
BigDumbDinosaur wrote:
"Backing up" to the "cloud" is not backing up. If the machine that has been backed up goes completely DOA due to a failed disk, you cannot restore it from an off-site backup because you won't have any Internet access, because you won't have an operating system loaded.

Nonsense. Load the OS and then recover over it. By that same logic, you can't consider a local tape backup as a backup.

And, of course, if you have an Apple product, you CAN restore directly from the internet. If you have an iPhone, back it up to iCloud, and drop the phone into a vat of acid, then upon purchasing a replacement iPhone, that device will be properly restored. You can similarly do that with your Mac laptop or desktop.

Carbonite, a well-known cloud backup service, has instructions to make a bootable recovery CD to restore your machine. Or, buy a new machine, get it "booted", and launch their recovery software. If they have a Linux option for recovery, you can get a recovery Linux disc/key fob to boot your machine and do the recovery -- even if you're recovering a Windows installation. If you're not connected to the internet, then you have different issues. But that's just the truth of living in a connected world.

Would it suck to restore 2 TB of OS, software, movies and photos from a cloud backup? Oh yeah, it sure would. But that's not to say it isn't possible.

In a similar anecdote: at work, I had a bad memory stick that corrupted my Windows boot volume. Fortunately, I had a Linux partition just hanging around. I booted that up, downloaded Java, downloaded my IDE, installed SVN, and downloaded the source code I was working on. After a quick lunchtime trip to get a new memory stick, I was back up and running, albeit on Linux. So, in that sense, I had an "ad hoc" cloud backup (i.e., the source code of my work). I used "the internet" for my tools, and the local SVN server for my "data".

I run the Mac's Time Machine to an external drive, so it's backed up every hour. Time Machine is awesome. If my house is struck by a meteor, well, I'm SOL.

What I should do is a monthly or quarterly backup to a disk that I take off site. But the simple truth is, I have yet to run out of space for photos on my phone, and so I won't be losing those.

As far as code goes, I have a local SVN repository set up, but I host it on my DropBox volume. So, every time I commit, it's zipped up to "the cloud". I have several GB of free storage from DropBox. If you don't like DropBox, Google Drive gives you 15 GB for free.

Can DropBox go out of business? Sure it can. Can it vanish overnight? Sure it can. But it's an off-site mirror, so whatever I have locally is archived off site as well. DropBox can "go away" and I haven't lost anything.

And if DropBox vanishes at the same time a meteor hits my house, then, yea, my source code is in for a bad day.

Another option for source code is to set up a free repository on BitBucket. BitBucket allows free private repositories; GitHub only allows free public ones.

I haven't done this; I'm content with my SVN/DropBox solution.


PostPosted: Sun May 28, 2017 8:42 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
whartung wrote:
By that same logic, you can't consider a local tape backup as a backup.

Actually, the backup software I use on my Linux boxes can generate a bootable tape that is compatible with all currently produced LTO tape mechanisms. I don't use that method in my installation, but some of my clients do. The one client who uses RDX cartridges can boot a machine with completely empty disks from a cartridge and do a full restoration as soon as a BASH prompt appears. That, my friend, is the ultimate in bare-metal recovery.

The problem with the so-called "cloud" method is you don't have physical possession of the backup medium. In fact, you have no guarantee that the backup medium will be immediately available in an emergency situation (read the fine print that you agreed to when you signed up for the service). Your only link to the backup medium is through a faceless, third-party organization that is accessible only through the Internet. I won't even mention the ridiculous amount of time it would take to do a full restoration through an Internet connection.

In the USA, if your company has to conform to HIPPA laws, exclusive use of a "cloud" backup does not comply with security requirements, as the company doesn't have physical possession and control of the data. HIPPA requires that the company directly maintain possession of the data at all times. That is only possible with backups generated on local media. Furthermore, certain organizations subject to HIPPA requirements cannot, by law, use an off-site, third party backup method under any circumstances, as doing so puts sensitive client data into the hands of people who are not authorized to possess it.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sun May 28, 2017 8:51 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
I wouldn't think most of us here are subject to such constraints.


PostPosted: Sun May 28, 2017 8:55 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
BigEd wrote:
I wouldn't think most of us here are subject to such constraints.

We aren't, of course, but when it comes to something as important as data backup, I would never trust strangers, even if the data were strictly for personal use.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sun May 28, 2017 9:02 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Of course, you must act according to your preferences, but you should note that your pronouncements often sound absolute, and then turn out to mean something else.


PostPosted: Sun May 28, 2017 11:01 pm 

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1385
BigDumbDinosaur wrote:
In the USA, if your company has to conform to HIPPA laws, exclusive use of a "cloud" backup does not comply with security requirements, as the company doesn't have physical possession and control of the data. HIPPA requires that the company directly maintain possession of the data at all times. That is only possible with backups generated on local media. Furthermore, certain organizations subject to HIPPA requirements cannot, by law, use an off-site, third party backup method under any circumstances, as doing so puts sensitive client data into the hands of people who are not authorized to possess it.

Actually, it's HIPAA (Health Insurance Portability and Accountability Act), and not all cloud (a fluffy term, for sure) providers are equal. Also realize that HIPAA compliance is about more than just hardware; the software is also required to be HIPAA compliant. Here's a link for IBM SoftLayer's HIPAA compliance:

http://www.softlayer.com/info/hipaa

I was also just informed by one of our (IBM) reps that BPM and ODM are now HIPAA compliant. The lack of this prevented us from selling software and solutions into the healthcare industry before I retired a couple of years ago. It finally becomes a reality. This is mostly out of context for this post, but at least it's additional information on cloud services and HIPAA compliance.

_________________
Regards, KM
https://github.com/floobydust


PostPosted: Mon May 29, 2017 4:13 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
floobydust wrote:
Actually, it's HIPAA...

I seem to routinely misspell HIPAA as HIPPA...that 'P' just wants to take over. :D

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon May 29, 2017 9:30 am 

Joined: Tue Mar 02, 2004 8:55 am
Posts: 996
Location: Berkshire, UK
I pay GitHub a small amount each month for a private area in my account. Most of my projects are of academic interest and have no commercial value, so I keep them in the public half of my account (like the SXB hacker programs and emulators). I use the private half for projects that are in 'stealth' mode (i.e., incomplete code that is not ready to go public) or potentially commercial (not many of those).

When my development laptop's hard drive broke a couple of weeks ago, I recovered all my active projects from GitHub. I lost only a couple of minor edits I had not committed.

_________________
Andrew Jacobs
6502 & PIC Stuff - http://www.obelisk.me.uk/
Cross-Platform 6502/65C02/65816 Macro Assembler - http://www.obelisk.me.uk/dev65/
Open Source Projects - https://github.com/andrew-jacobs


PostPosted: Mon May 29, 2017 3:20 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
Our company has been hosting health care services and PHI data in virtual data centers for clients for over 15 years. These are data centers where we barely have physical access, much less our clients. Our offices are in Southern California, the production data centers are in the Midwest, and we have clients nationwide. If we want hands-on time in the data center, we're buying plane tickets. I myself have never seen them. I know of only two occasions where our operations people have been out to those data centers, and one of them was when we migrated from a SoCal facility.

I can't speak to the specifics of HIPAA (and I write HIPPA all the time as well), but whatever we're doing seems to appease both our clients' auditors and our own.

That bootable tape unit sounds very nice, but it's reminiscent of when I first installed FreeBSD on my PC years and years ago. I downloaded a two-floppy boot set and installed the entire OS from the internet. It also recalls my first "aha" moment with networking. I had minimal exposure to it at the time, but I went over to a client's office for some reason, and they were making a tape for us.

The tape was on one machine, and the data on another. He simply cpio'd the data into a pipe, through rsh, to cat, to the tape drive. Seeing that turned a lot of lights on in my head.

I appreciate that this tape unit you're talking about can portray itself to the host as a random-access block device so that magic can happen. Still, pretty neat.


PostPosted: Thu Jun 15, 2017 5:20 am 

Joined: Sat Mar 11, 2017 1:56 am
Posts: 276
Location: Lynden, WA
Ok, I've been slacking on the project a bit, but I'm back at it.

I am currently working on my monitor (although I think it's going to morph into a full-blown OS if I implement everything I have in mind).

I'm contemplating how to parse arguments for commands.

I can think of three strategies.

Strategy 1:

I scan for spaces as the keys are being pressed. This way I have separate buffers for the command and for each argument. The only real draw to this method is that my current command recognition routine could remain virtually unchanged. The downside is twofold: it requires a buffer set aside for as many arguments as any command might need, and if I want to be able to backspace destructively for typos, that's gonna get messy. I don't like strategy 1.

Strategy 2:

I store the entire typed text in one buffer. I now have to alter my command recognition routine so that it looks for spaces and starts interpreting everything after a space as a new argument. I don't love this because, as I started working it out, I realized that with the way my command recognition routine works, someone could partially type a command, then a space, and it would be recognized as a valid command unless I changed a bunch of stuff. Nothing difficult, but it stopped being as slick as I like it. I like this strategy more than strategy 1, but I don't love it.

Strategy 3:

I store the entire typed text in one buffer. Then, before I enter the command recognition routine, I scan the buffer for spaces and strip out the arguments into individual buffers. I also use this opportunity to count the arguments, so that it's easy to throw a "too many arguments" error, or conversely, if a command that requires arguments is typed with none, I can do the old "list the valid possibilities in a handy help screen" trick. I really like strategy 3.
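
A minimal sketch of strategy 3 in C, with invented limits of four arguments of sixteen characters each (the real thing would be 6502 assembly, but the control flow is the same):

Code:
#define MAX_ARGS 4                 /* arguments per command (invented)  */
#define ARG_LEN  16                /* longest argument accepted         */

static char args[MAX_ARGS][ARG_LEN + 1];  /* one buffer per argument    */
static int  nargs;                        /* argument count             */

/* Copy each space-separated word of the input line into its own buffer,
   counting as we go.  Returns -1 for "too many arguments". */
int split_args(const char *line)
{
    nargs = 0;
    while (*line) {
        while (*line == ' ') line++;           /* skip runs of spaces   */
        if (*line == '\0') break;
        if (nargs == MAX_ARGS) return -1;      /* too many arguments    */
        int n = 0;
        while (*line && *line != ' ' && n < ARG_LEN)
            args[nargs][n++] = *line++;
        args[nargs][n] = '\0';
        while (*line && *line != ' ') line++;  /* drop an overlong tail */
        nargs++;
    }
    return nargs;   /* note: the count includes the command word, args[0] */
}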

Anything to add? I know how I'd code all three versions, so it's a question of pros and cons.

Also, instead of buffers, I was wondering if some sort of software stack for arguments would be a good way to go. (Gotta think Garth would like this plan.) That way, I don't have to empty buffers. So maybe I'd push both the characters and a character count onto my stack.

Mostly thinking out loud here.


PostPosted: Thu Jun 15, 2017 6:18 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Dan Moos wrote:
Ok, I've been slacking on the project a bit, but I'm back at it.

Hey! No goofing off around here! :evil:

Quote:
I am currently working on my monitor (although I think it's going to morph into a full-blown OS if I implement everything I have in mind).

Just a curmudgeonly opinion, but your M/L monitor should be a stand-alone application—running in the same ROM as the operating system, perhaps, but still separate.

The operating system and the monitor play two very different roles, as the monitor is a user application and the operating system is not. If the monitor needs an operating system service, such as getting input from the console, it should request it via a formalized API to the operating system and should not have to have any knowledge of what is going on in the operating system to make such a call. This reasoning is based on a fundamental principle of computer science, often referred to as the "Chinese wall," in which applications and the operating system remain two distinct entities and communicate only through an API. Microsoft violated that "Chinese Wall" principle when they developed Windows 95, 98 and ME (e.g., the desktop shell making direct entry into the kernel, rather than through the formal API), resulting in an unstable environment that is easily attacked by outside entities.
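
To make the idea concrete, here is a toy sketch of such a wall in C (the names and the two entry points are invented for illustration, not taken from any real system):

Code:
/* The monitor's entire view of the OS: a table of entry points (the API). */
struct os_api {
    int  (*getch)(void);        /* wait for one character from the console */
    void (*putch)(int c);       /* send one character to the console       */
};

/* The monitor calls through the table and knows nothing of kernel internals. */
void monitor_echo(const struct os_api *os)
{
    int c = os->getch();        /* formal API request...                   */
    os->putch(c);               /* ...never a direct jump into the kernel  */
}

On a 6502 the same idea is usually a jump table at fixed ROM addresses that the monitor calls with JSR; the point is only that every service goes through published entry points.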

Quote:
I'm contemplating how to parse arguments for commands.

I can think of three strategies.

Strategy 1:

Strategy 1 is awkward and may limit the number of arguments that can be passed with commands. The problem comes with argument length, which is variable, and buffer size, which would be fixed, unless you plan to allocate buffer space on the fly (more on this below).

Quote:
Strategy 2:

I store the entire typed text in one buffer. I now have to alter my command recognition routine so that it looks for spaces and starts interpreting everything after a space as a new argument.

That is the basic strategy used in most shells. The shell itself parses for the command word and the code that is executed to carry out the command parses for arguments. The command word and the following arguments, if any, are separated from each other by "whitespace," which is minimally defined as a horizontal tab ($09) or a blank ($20)—other characters, such as a comma, could also be considered whitespace. The entire typed command, consisting of the command word and optional arguments, is terminated by a null so the parsing function can find the end of the character string. Your parser should also know how to deal with leading, trailing and redundant whitespace, which is trivial to implement, even in assembly language.
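
In C, that parse might look something like the following sketch (the names and the token limit are illustrative only; a 6502 monitor would do the same walk in assembly):

Code:
static int is_ws(char c) { return c == ' ' || c == '\t'; }

/* Carve the null-terminated input into tokens in place and return the
   count.  Leading, trailing and redundant whitespace fall out naturally. */
int tokenize(char *buf, char *tok[], int max)
{
    int n = 0;
    while (*buf) {
        while (is_ws(*buf)) buf++;          /* skip a whitespace run      */
        if (*buf == '\0') break;
        if (n < max) tok[n++] = buf;        /* remember where it starts   */
        while (*buf && !is_ws(*buf)) buf++; /* walk to the token's end    */
        if (*buf) *buf++ = '\0';            /* terminate it in place      */
    }
    return n;                               /* tok[0] is the command word */
}

/* Typical use:  char *tok[8];  int n = tokenize(inbuf, tok, 8);  */

The command's own code then receives tok[1] onward; nothing is copied out of the input buffer.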

Quote:
Strategy 3:

I store the entire typed text in one buffer. Then, before I enter the command recognition routine, I scan the buffer for spaces and strip out the arguments into individual buffers. I also use this opportunity to count the arguments, so that it's easy to throw a "too many arguments" error, or conversely, if a command that requires arguments is typed with none, I can do the old "list the valid possibilities in a handy help screen" trick. I really like strategy 3.

That would work, but since memory is not inexhaustible, and since you don't know in advance how many arguments will be passed and how many bytes in length each argument will be, you may find yourself writing a complex memory allocation function to deal with these matters. Parsing the input buffer and passing some buffer pointers to the command execution code demands little more memory than needed by the buffer itself. At least in an M/L monitor, there is little or no need to save arguments once they have been processed. So why expend memory to store them separately from the input buffer?

Quote:
Also, instead of buffers, I was wondering if some sort of software stack for arguments would be a good way.

I don't think that would be a good strategy. A command line is a character string and hence a single data entity. In the scenario you are developing, you will read the string from left to right and process each "word" as it is encountered. That is not something that lends itself to stack storage.

On the subject of issuing commands within the M/L monitor, the classic design uses single letters to select commands, such as A for assemble code, or M to dump memory. Doing so greatly simplifies parsing.
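
For comparison, a single-letter dispatcher reduces to one switch on the first character; a sketch with an invented command set (the handlers are hypothetical):

Code:
/* Hypothetical command handlers, implemented elsewhere in the monitor. */
void cmd_assemble(char *tok[], int n);
void cmd_dump(char *tok[], int n);
void cmd_help(void);

void dispatch(char *tok[], int n)
{
    switch (tok[0][0]) {                     /* first letter picks command */
    case 'A': cmd_assemble(tok, n); break;   /* A = assemble code          */
    case 'M': cmd_dump(tok, n);     break;   /* M = dump memory            */
    default:  cmd_help();           break;   /* anything else: show help   */
    }
}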

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Fri Jun 16, 2017 12:40 am 

Joined: Sat Mar 11, 2017 1:56 am
Posts: 276
Location: Lynden, WA
I think my strategy three is actually in line with what you said. I just described it poorly.

Basically, I'd continue to have a single command-line buffer. Then, I'd run the string through a sort of preprocessor routine that basically looks for spaces and creates pointers to each argument.

I only threw in the stack notion as a what-if. In my head, implementing it only added complexity, but since it came out of gut thinking, I figured maybe I was missing something.

I'm sticking with full-word commands. I already have that portion working, for one thing, and it wasn't hard at all. Mostly, though, I just think it's cooler. This whole thing is "just for the hell of it", so having full-word commands adds minuscule complexity but, to me, makes it way cooler.

