Show Posts



Messages - Reid Bundonis

16
Sorry for the slow reply   :o  Been one of those weeks...

Quote
I ran a port scan and I can see all the ports open.  No matter what I did I could not get port 81 and 444 to open up on the server.

Did you run the port scan from the LAN or the WAN?  If you are able to hit these on the LAN but not the WAN, then something is still acting as a firewall.  Try running this command on the server:

Code:
/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate
I would expect to see "Firewall is disabled. (State = 0)"

Then for good measure, try:

Code:
/usr/libexec/ApplicationFirewall/socketfilterfw --getblockall
Once again I would expect "Block all DISABLED!"

On the Access tab of Server.app, is your Default User Access and Default Network Access set to All users and All networks?  If so, are all the ports listed set to All Networks (excluding caching server)?  If so, I would agree that you likely do not have the software firewall enabled.  So that leads to:

Quote
If the only firewall on the system is the one in System Preferences under Security & Privacy then no the firewall is currently not active. I was having a hard enough time trying to get ports 81 and 444 open for FileMaker server.  Currently I do not have FileMaker server installed because I was changing the ports around from 81/444 to 80/443 but I will be installing it back on there tonight because we will need to use it this week for work. I couldn't get port 81 open so I figured out how to comment out the listen ports using your book and installed FileMaker server on port 80 then I was able to access it over the internet.

I am wondering if you are actually using that route to the server.  In my mind I picture an ISP-provided router with its WAN port connected to the coax or ethernet feed.  This gives the router a public address of 17.18.19.20 (example) that you or the ISP fixed on the device.  In fact, if you asked for 5 public addresses, then the ISP's router is listening for all of those addresses.  The only way for the server to pick up one of those addresses would be to put a switch between the feed and the ISP-provided router.  Then you can assign fixed public addresses to multiple devices.

So here is another question.  Let's say your two public addresses are 17.18.19.20 and 17.18.19.21 (the others are 22, 23, and 24 and they are not yet used).  Which address do you hit the server with from the outside?  And if you think you are hitting the server I suspect you are hitting the router and there are port forwards enabled. 

I think I am speaking in circles here.  My head is going faster than my hands.  If the topology is like this:

[topology sketch image from the original post not preserved]

Then what is the ???? port configured as?  A secondary WAN port, or is it part of the LAN?  If part of the LAN, then you are not routing directly to the server.  If a secondary WAN, then the firewall on the router must be active.

By way of testing, disconnect the secondary ethernet on the server while the port 80 trick is in play.  If you are able to hit the server while off your LAN then you are routing through the primary WAN port.

Quote
Well it doesn't absolutely need to have a public address I guess. When we signed up for our internet about 8 years ago I told them we needed static ip's so we had to get a group of 5 static IP's from them.  I currently only use two static IP's one for the router on the network and one for the server.  We have a external sales person who needs to access FileMaker and stuff while out and about.  And I access the server remotely quite a bit from home because I can't do everything that I want to it while I am at work.  The only firewall/gateway is the wifi router that provides internet to the entire LAN over ethernet and obviously wifi. 

I will admit I am getting chills thinking about a server being placed on a public address with no firewall.  While Apple does a good job locking systems down, ports are still active. 

Hmm, now I am rethinking my topology sketch.  You say the WiFi router is the firewall/gateway?  Did the ISP set their device to bridge mode?  Above you mention the business gateway has the firewall disabled.  Did you mean it is in bridge mode, allowing you to define static addressing further down the line?

I will admit, I am having a hard time wrapping my head around your network setup.  If the ISP router is in bridge mode, then the server, at the second address, should be wide open assuming you have no software firewall.  However, if it is not in bridge mode, then I suspect the second connection is not routing.  (Another test would be to disconnect the server from your LAN and try to browse the internet.  The secondary port will flip to primary, making it the default route.)
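If you want to see that flip happen rather than guess, check which interface owns the default route before and after pulling the cable.  A minimal check (interface names will vary):

Code:
netstat -rn | grep default

Watch the interface column (en0, en1) change when the LAN cable comes out.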

I am going to let you provide some answers.  I believe you can simplify your topology and possibly implement some better security.  If 80 and 443 are coming inbound then there is a device that is granting that port forward.  If 81 and 444 are not coming in, then the same device is not letting those ports pass.

Quote
It almost seemed if you could marry DNS you would haha...

I won't go that far but DNS is a critical service and proper setup solves many odd issues.

Quote
The server right now is mainly used as a FileMaker Server, I know I want to configure it to do more but I have yet to determine what else I want it to do.  Still learning more about servers and what all services they provide.  I know by reading your books I will have a better understanding of Mac servers and can't wait to get things up and going properly.

I guess by installing FileMaker Server on ports 81 and 444 it is technically a little more secure as you would have to type in the port number in the address bar to get to the webdirect side of FileMaker Server.  But I can't get them to open up properly so I guess I will just use ports 80 and 444 by commenting them out of the apache_serviceproxy.conf file for now.

Nice!  Just did the same earlier in the week to get a Rumpus server back online.

Again, I apologize for the delay.  The rest of the week (fingers crossed) should be better.

17
Nick,

Thanks for the kind words.  I am glad the book is helping with your deployment.

Before we go on, I will caution that it sounds like you are working on a production system.  Make sure you have a backup.  Be thoughtful about your alterations.  And, despite the fact that your system is not set up as I recommend, it has gotten you this far.  If the machine's primary function is FileMaker, then many of the rules go out the window.  It would be the same as using a server class system for only Rumpus or only Kerio.  While it might be good to set a proper foundation for the possibilities of the future, these third party tools can run mostly in isolation.

Also, regarding FileMaker.  Ports 80 and 443 are only required if you are using the web publishing engine.  So while the installer will yell at you, if you are not using that part of the product, you generally can ignore the message.  As long as you can reach the admin pages you should be fine.

Ok, so let me focus on dual ethernet, DNS, and network setup.

Dual Ethernet:  You describe the Mac Pro as having a public and a private address.  Eth 1 has a LAN address and I assume it is the primary interface when looking at the Network Preference panel (in other words it is at the top of the list).  Eth 2 is set up with a WAN address.  Ok, so some questions.

• Is this server accessible by its public IP address?
• Are you using a firewall to protect and prevent access to the server from the public IP?  Back in 10.6 Server it was relatively easy to implement the NAT feature allowing the server to act as a router.  In 10.11 these features are a bit more obscure.
• Does the server really need to have a public address?  How do your users interact with the server when they are not in the office?  Do you have a traditional firewall/network gateway?

Now, the next statement is clearly my bias.  I am not a fan of using OS X as a firewall/router/gateway.  Even in the 10.6 days, when it was generally as easy as enabling the NAT service, I would always insist on a true firewall product and then safely deploy the server behind the firewall.  This would allow me to selectively port forward to the server if needed and then implement a remote access policy involving VPN.  In our world today there are too many network scanners (open up SSH and count the minutes before you are actively being brute-forced with a dictionary attack), vulnerability attacks, and software flaws that can expose your business data.  While no system is 100% secure, the right perimeter security approach can limit your exposure and provide alerts should active attacks occur.

So, what to do with the dual ethernet.  I agree that if it is there, don't waste it.  But as mentioned above, I am not a fan of using the system as a router.  If the environment demands, I would aggregate the ports to double bandwidth.  That may be overkill in your environment.  Alternatively, you can make two LAN connections (effectively multihoming across multiple physical connections).  By making the two connections, you can make the primary address the one for all of OS X's services and then use the second for all other services.  (As a side note, the 1.3 release of Foundation Services adds a section on multihoming - it will be released when Apple drops 10.11.2, which I think will happen this week.)

Next is DNS:  I am a bit of a fanatic when it comes to DNS.  I will admit that I get a little too passionate in my opinion of DNS.  I truly believe after all these years building servers that DNS is the key to success.  So your results may not be bad.  If this is an upgrade from 10.6, the presence of the localhost address is easily explained.  To confirm my suspicions, open System Preferences > Network.  Take a look at the DNS settings for your primary network interface (the one at the top of the list).  Is the first DNS server address listed as 127.0.0.1?

127.0.0.1 is the localhost address.  This was a trick that Apple used to pull on a server installation.  Since DNS is a locally running process, the use of the loopback address meant that no routing or link status was required for the server to resolve names.  If you are getting proper answers to your DNS questions (for example, if nslookup host.domain.tld returns the proper IP address and nslookup <ip_address> returns the proper hostname), then DNS is working.  If the reported DNS server is 127.0.0.1, it is because your network interface lists the loopback address instead of the actual LAN address of the server.  Once again, this is something I dislike because I've seen it cause confusion.  But technically it is functional.
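If you want to sanity check the whole name/address story in one shot, Server ships a tool for exactly this.  The hostname and address below are placeholders for your own:

Code:
sudo changeip -checkhostname
nslookup server.domain.tld
nslookup 192.168.1.10

changeip lives inside Server.app (on Server 5 it is under /Applications/Server.app/Contents/ServerRoot/usr/sbin) and should report "The names match. There is nothing to change." when forward and reverse DNS agree.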

Quote
I don't really know which Ethernet port I need to be setting up DNS on.  I currently have it setup on the LAN (Ethernet 1).

Yes.  The LAN interface.  If you continue to bridge the device, the WAN port will be satisfied by a public DNS server hosted by your domain registrar.  You should not be trying to host a public DNS server.  Not worth the effort.

The idea of DNS on the LAN is to ensure the smooth transition of mobile clients.  This is so much more important today with the growing number of mobile devices.  Here is a simple example.  You are running a mail server on your LAN.  You want to name the server mail.domain.tld, and this name must exist on both the LAN and WAN sides.  If it does not, then your users will need to alter client configuration each time they move from LAN to WAN and back again.  This is too much work.  By keeping the name the same, client devices simply use DNS to route to the appropriate path.  If on the LAN, the LAN DNS says go to this private address.  When on the WAN, a WAN DNS says go to this public address, and then your port forwarding rules take over and translate to the server.
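To make that concrete, here is the same name asked twice, once against your LAN DNS and once against a public resolver (the addresses are made up for the example):

Code:
nslookup mail.domain.tld              # on the LAN -> 192.168.1.25
nslookup mail.domain.tld 8.8.8.8      # public view -> 17.18.19.20

Same name, two answers, and the client never has to be reconfigured.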

The Network:  You mention the server is acting as a router by being bridged to the ISP router.  But so is the WiFi router.  With a business router I assume you have multiple public addresses.  Since you state that both the WiFi device and the server have public addresses, I will assume the modem is in bridge mode.  So my question is, does anyone other than the server use its public route?  If you went to any client device on the network and looked at its network stack (likely handed out via DHCP), do you see the server's LAN address in the router field?  I will guess not.  Based on your description of the network topology, I will bet the LAN address of the WiFi router is what is listed and it is acting as the default route.  If this is the case, then the server's connection to the public may be an unnecessary connection unless external clients are routing directly to it using its public address.

Quote
Whats your recommendation for my setup?

Whatever keeps your business running  :) 

Here are some questions for you to ask yourself.  What services am I running now?  What functionality am I missing that I want to deliver to my environment?  Am I as secure as I can be?  How well am I providing access to my users regardless of their location?  Am I securing the remote access of my users to prevent exposure of data?

If this server is just doing FileMaker and in your wildest dreams you can't foresee the need for additional services, then you could even ditch server and just use FileMaker on a client system.  But I suspect you are running more services.

Anything you do, take small steps and allow time to validate.  Even with the public connection of the server.  I suspect (unless users are accessing via its public address) that this connection is generally unused.  If this is the case, simply unplug the ethernet cable, leaving all configuration in place, and see if anyone screams.  If nothing changes, then that connection is extraneous and likely just a security concern.
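And if walking to the machine is inconvenient, the same test can be done in software.  en1 here is a stand-in for whichever port holds the public address:

Code:
sudo ifconfig en1 down    # take the interface offline, config untouched
sudo ifconfig en1 up      # bring it back if anyone screams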

Let's start there.  I hope the rest of the book rounds out your thinking and approach to your foundation services.  If the book(s) is/are helpful, please leave feedback and a review in the iBooks Store.  Let me know your thoughts on the topics raised above.  Don't post any public IP information about your environment.  ;)



18
Wow.  I am sorry to hear this.  Trust me, I want you to have the book.   ;D

I've not heard any other reports regarding this.  The book passed Apple's review process so in theory it should be fine.  Do you mind sharing your OS version and region?  Maybe there is some odd issue with a language setting?  Do you disable any fonts on your machine?  Perhaps I am using one that is not active?  Is anything reported to the system log when the open attempt is made?  Do you have a caching server on your LAN?  Maybe the cached copy of the book is corrupt.

Once Apple releases 10.11.2 I will post a minor update to Control & Collaboration.  Maybe the reset of the file will help.

Sorry for the problem.  It is a little disconcerting that Apple was unable to assist beyond giving up and refunding.

19
How nice that Dec 1 is Giving Tuesday!  http://www.givingtuesday.org

November is over and the donation amount is more than double that of October!  Thanks to all readers.  Keep spreading the word and drop a review on the iBooks Store.  Looking forward to an even stronger December.  Happy Holidays to all.

20
Awesome news!  I am very glad it all came together.  As mentioned, I am enjoying great success with both Yosemite and El Cap in net home deployments.  Only one production El Cap due to the US school year, as school starts in September and El Cap was not released until the end of Sept.  But this summer I upgraded 6 schools to Yosemite and so far so good.  Not one adjustment has been made in any of the districts with the exception of software updates.  Really solid performance, and AFP has been humming along like a champ.  One school still has not done a server reboot...  They asked me to come in before the holiday break and "oversee" the reboot.  Very happy with the deploys.

Quote
Thinking ahead now, do you think that with current Server.app we could have a way to create high availability of Homes on the network? You can also be honest with me and tell me that I am asking apple Server.app to do too much ☺ however we all love pushing the boundaries!

With this scenario that we have setup now, if OD Master fails, users from there will loose connection to their homes or if Replica fails, users from there will loose their connection to homes, until we reboot or in worse case scenario, rebuild and move the network home share from failed server to the new server.

I will admit I've struggled with this for a long time.  Remembering the days in which failover was part of OS X Server...  This is a really challenging problem because of the way everything stitches together.  Let's look at it from a naming and path perspective. 

You create two servers, one named mac1 and the other mac2.  They get a DNS identity such as mac1.elcap.com and mac2.elcap.com.  These are independent systems with unique IP addresses.  Next, you chain on some storage that is uniquely connected (direct attach) to each server.  On the storage you define a folder and then the student homes get populated.  Then, you create users and assign each user an NFSHomeFolder attribute pointing to a DNS name and a file system path.  The name and the path are unique to the box and trickle down to an IP.  The storage is isolated to its host. 

So, mac2 decides to go into the weeds for a while.  Ok, under the typical model above you could move the storage to mac1 and change all the NFSHomeFolder attributes for the students who were on mac2 to now point to mac1.  Or, you can change DNS to point mac2 to the same IP address as mac1, allowing no change in the user records.  Either method can resolve an outage in a relatively short time, but it requires physical relocation of the data or possibly a flush of mDNSResponder to recognize the DNS change.  Possible.  Not automatic.
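For the record-editing route, the change is one dscl command per user.  A sketch with made-up names and paths (the HomeDirectory attribute would need the same treatment):

Code:
sudo dscl /LDAPv3/127.0.0.1 -change /Users/jdoe NFSHomeDirectory \
  /Network/Servers/mac2.elcap.com/Volumes/Homes/jdoe \
  /Network/Servers/mac1.elcap.com/Volumes/Homes/jdoe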

Ah, but what if it is the storage that fails and not the host?  If you are not replicating the data or using some form of a shared file system, you are really in a bind.  At one of my schools I have a setup similar to the one you just did.  I have two servers (minis), each with storage (Pegasus), and the load is divided between the two based on grade level divisions.  There is a period in the day where no computers are ever used (lunch/recess).  During that period and at the end of the day I rsync data between the two systems.  It is not live replication but it is always within 4 hours.  By doing so, I can ensure that I can drop an entire server and storage device and still be able to serve the whole school.  My plan is to do the DNS swap, allowing the remaining server to assume the ID of both servers.  Now that is my plan.  However, I also wrote some dscl scripts to rapidly alter student home folder paths to allow a quick flip from mac2 to mac1.  Luckily I've not needed to implement this.  And I have concerns about OD trying to replicate to itself.  But in theory this should work for the client devices.
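The sync itself is nothing exotic.  Something along these lines, with made-up volume names (Apple's bundled rsync wants -E to carry extended attributes and resource forks):

Code:
rsync -aE --delete /Volumes/Pegasus1/Homes/ mac2.elcap.com:/Volumes/Pegasus2/Homes/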

Quote
Scenario one, my ideal situation for network homes would be iSCSI attached storage to a file server that OD Master and OD replica could connect to and in case of any one of the OD servers crashes, there would be no loss of network homes.

But you still have the DNS and NFSHomeFolder path challenge.  Even with common storage, if Mary's home is on mac1 and John's is on mac2, the loss of the host remains the roadblock.  Now, there are likely some round-robin DNS games you can play, but knowing OS X and mDNSResponder, the machine will start resolving different addresses during a user session.  This is worth a shot in isolation.  The idea being that multiple servers will have the same host name but different IP addresses.  If you were to do this, I would recommend multihoming the Ethernet and setting the primary addresses to fixed names and numbers: master = 172.16.4.10 and replica = 172.16.4.11.  Then you would create nethome and point it to both 172.16.4.12 and 13.  Then you would assign 12 to the master box and 13 to the replica box as their secondary addresses.  This way OD can communicate using 10 and 11 as master and replica, while network homes can all be assigned to nethome at both 12 and 13.  In theory, clients will, by the law of averages, have the responses split by the DNS system, and each will seek nethome at a different IP address.  But since each student record has a path for nethome as the NFSHomeFolder path, it will simply route to where it belongs.  Now, the one challenge here is what happens when a host is unresponsive.  Round robin does not go back to DNS if the destination is unreachable.  This is theory by the way.  I have not tried this (but now I may  ;))
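A quick way to prove the round robin is working, using the example names and numbers above:

Code:
dig +short master.elcap.com      # 172.16.4.10
dig +short replica.elcap.com     # 172.16.4.11
dig +short nethome.elcap.com     # should return both 172.16.4.12 and 172.16.4.13

Two A records on one name is all round robin really is; repeated lookups of nethome should alternate which address is listed first.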

Quote
Scenario two, could be what you propose and I could just break up Students homes on to Replica_1 and Staff/Admin homes on Replica_2, with failovers set up in VM ESXi.

That may work also, especially if you have a failover unit that assumes the role of the failed instance.  If you get that going it might be the best of all solutions since you are just discarding an instance for another one.

Quote
Am I dreaming or could we get this to work?! What are your thoughts?
It’s refreshing to have a great discussion that actually leads to an awesome outcome! Thank you again

It is good to dream and to push.  I think you have some options with the ESXi side that neither of us is seeing quite yet.  I can see some OD synchronization issues with spinning up a replacement instance, but if you are much like the schools I work with, modification to OD after the school year starts is minimal. 

I say try it while you have the opportunity to explore.  And glad to help.  As mentioned, making Server a successful product has become my mission.  I truly believe in the product and the role it can play in organizations of nearly any size.

Reid

21
Quote
Thank you again for quick replays and for not ignoring questions!! Very rare these days...

I am committed to my mission.  Your success with Server means more use.

Make sure the ignore ownership box is unchecked for any drive connected to the server.  You want to be able to enforce permissions.  Not on the parent home share but on the subfolders.  Otherwise (in theory) any student would be able to access any other student's work.

Let me know what you find.  As mentioned, I will replicate in my environment as soon as I am back to lab (Wednesday).

22
Sorry for the slow reply.  Had a full day engagement today so everything else gets put aside.  Ah, nothing like deployments in financial services companies...  Can't download anything because the networks are locked tighter than a drum.

Ok, let's go point by point.

1:  Excellent.  Always better to work with a unified branch than trying to stitch versions together.

2:  So 10.11.1 and Server 5.0.15 clearly changed how some things are done when compared with the beta cycle and the initial release.  It is possible you are running into one of those oddities.  I know network homes is part of what I test for the books as it is something I support in the field.  All things being equal, we should be able to get this to work.  More later.

3:  Agree.  SMB has burned me more times than I can count.  While Mac to Mac SMB is almost usable, the debacle of 10.7 through 10.9 nearly got me kicked out of a couple of large enterprises.  At the same time the Xserves were being retired and data was being migrated to Windows servers, Apple destroyed the SMB client.  Not a good couple of years.

4:  Excellent.  Once again, keeping everything consistent is easiest for testing.

Quote
..you have to bind the client to Replica, otherwise you would not get the access to network homes that is offered through Replica File Sharing.

Do you think that is intentionally done by Apple engineers or could it be a bug in the Server.app system or is Apple pushing customers away from Network Homes?

With Yosemite and above I recommend always binding to the replica.  In the old days, a machine bound to the domain would respect the entire OD tree and seek out a controller should the primary be lost.  This no longer seems to work.  In order to get the full tree you must bind to the replica.  I don't think Apple ever publicly revealed this tidbit but it really does make a difference.  So add binding to the replica as your standard process.
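If you want binding in your imaging or setup scripts instead of clicking through Directory Utility, dsconfigldap does the same job.  The server name here is a placeholder:

Code:
sudo dsconfigldap -v -a replica.elcap.com

That adds the replica as an LDAP directory node; -v gives verbose output so you can confirm the bind actually completed.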

Now, the next statement is clearly my opinion.  But it is based on a number of observable facts.  I do believe that Apple is making the support for Network Homes more difficult in a number of ways and for a number of reasons.  For example, Apple truly believes that everyone should have her own machine.  The iPad exemplifies this.  There is no user account.  It is yours and you can do with it what you please.  This mentality is being applied to OS X in a number of ways.  They include DEP and VPP.  The assumption of both of these programs is that the end user is the device's administrator.  There is no unified admin account.  There is no centralized authority that manages and maintains the device.  The end user does these tasks.  With DEP, hand a device to an end user and let them set it up.  If the MDM is set up properly, you deliver everything required via an on-enrollment policy.  Then, if you believe that the entire OS X ecosystem exists in the App Store, then VPP once again removes management from the IT group.  You invite the user, assign apps, and the rest takes care of itself.

Now you and I live in the real world.  The world of Microsoft, Adobe, plugins, and edu-tainment software that is not available via the App Store.  We also live in the world where schools cannot afford to have a device for every student and device sharing is a requirement.  Apple has established a long history of providing NetBoot and Network home technology.  Heck, this stuff was possible with OS 9.  And when you consider the advances made since then (802.11ac vs 802.11b, 1000Base or higher vs 100Base, SSD drives vs IDE, etc.), there is no reason NetHomes should not be well supported and highly successful when considering today's wireless networks and device technology.

Ah, but the other question is who is Apple encouraging to make software that is Nethome aware?  If Microsoft and Google are struggling to do it, you can pretty much be sure that all these mom and pops making apps in the App Store are not testing for it.  I fear that Apple's vision of the future is a Mac in every hand and a directory-less deployment.  No binding to domains.  Only enrollment to management.

Ok, you got me on my soapbox.  Back to your issue.

Quote
My frustration comes from Apple moving away from Server software and hardware, when they had a really good thing going with 10.6.8 server and then decide to get the blasted 10.7 and then 10.8 out. I have now decided it is time to let go of Xserve’s and move into 10.11 Server.app space as it looks promising and stable, but.....

I've been there and I've come to terms with it.  But yes... 10.7 was crap.  10.8 was better and we still have many customers on it.  10.9 was the skipped OS for many reasons.  10.10 has proven to be my favorite.  10.11 is growing on me but the fourth quarter of the year is always slow for customer upgrades.  January and February are the months to watch for.  By then I expect 10.11.3 at least and a few fixes to Server.app.

Quote
Yes, we are a school and based in Australia, so my testing time is now until the end of December :) We have 120 iMacs across the college, 3 Xserve’s and 2 Mac Mini servers. For next year, the plan is to go to ESXi VM space with two black vases (Mac Pro cylinders :), so i am vigorously testing and recording my highs and lows haha

I can confidently say that running 10.11 in VM is a breeze and ESXi with vSphere 6 is working nicely.

Aha!  Sorry.  I was assuming a US school calendar.  Hello from the other side of the planet.  :)  Very nice on the ESXi setup.  I explored that with some spare 2009 Xserves a while back and was very pleased with the performance as well.  I never invested in good storage though, so it was really just an experiment.  Here is another area Apple could improve.  Imagine if OS X Server could be run on non-Apple hardware!  Oh man, talk about the instant acceptance by enterprise.  Oh to dream.  Heck, I would even pay for that version!

Ok, no really, back to the issue.

So, in my mind you should be able to do the following.  I don't have enough gear with me tonight (on the road) so I can not actually test this.  But, this is the junk that rolls around in my head so I have the implementation thought through.

1:  Build the master.  Ensure that DNS is set up for both the master and the fileserver.  Then create an OD Master on the master server.
2:  Build the replica.  Ensure DNS and time are in compliance and create a replica on the secondary server.
3:  Test by doing the following:
a:  Create a user on master and watch it replicate down to the replica - user can be a Services only account.  Let's call this user Chief Master
b:  Create a user on the replica and watch it replicate up to the master - user can be a Services only account.  Let's call this user Carbon Copy
4:  Create a network home share on the master using Server.app on Master - Do not add any ACEs as the POSIX should be enough.
4a:  Start file sharing on the master server
5:  Create a network home share on the replica using Server.app on replica - Do not add any ACEs as the POSIX should be enough.
5a:  Start file sharing on the replica server
6:  On the Master, edit Chief Master and set his home to the net home share available on the Master.  (You will note that the one from the replica is not visible anyway)  Check the net home share and you should see Chief Master's home folder created.
7:  On the Replica, edit Carbon Copy and set his home to the net home share available on the Replica. (You will note that the one from the master is not visible anyway)  Check the net home share and you should see Carbon Copy's home folder created.

8:  Don't start any other services.  Yes, I know, Profile Manager is needed to redirect those cache files.  And Profile Manager can simplify the binding process.  But for now, let's just focus on the core items.  You should have on Master:  DNS primary (if no DNS is available elsewhere), Open Directory Master, and File Sharing with a Network home share available over AFP.  On the Replica you should have:  DNS secondary (optional and if no DNS is available elsewhere), Open Directory Replica, and File Sharing with a Network home share available over AFP.

9:  Now it is time to focus on a client device.  Do the following:  Open System Preferences > Users & Groups.  When you go to bind, press the Directory Utility button and bind the workstation using that tool.  Yep, I know, it all should be doing the same thing... But I just don't trust the simplified System Preference method.  Bind to the REPLICA!
10:  Once the workstation is bound, confirm that you can see the user accounts.  Open Terminal and enter:
Code:
id cmaster
id ccopy
Replace the short names with the ones in your test environment.  You should be able to get basic account data for both.
11:  Logout or reboot.
12:  Try to login as each user.  What is the result?
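While you are at it, peek at the server side and confirm the auto-created homes and their permissions.  The share paths below are placeholders for wherever you created the net home shares:

Code:
ls -le /Volumes/Data/NetHomes/cmaster    # on the master
ls -le /Volumes/Data/NetHomes/ccopy      # on the replica

The -e flag prints ACLs along with the POSIX bits, which is exactly what we care about here.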

So, summary stuff.  The creation of the user and the definition of the user as a network home folder user should create the user's home folder within the net home share.  You should not need to log in to create it.  If you are getting a partial share, I suspect you have permission problems.  What type of storage are you creating the net home shares on?  If external, are you enforcing permissions?  If you are ignoring permissions you will end up with a mess.  If you are enforcing an ACE on the parent you will end up with a mess.

Ok, homework time.  I will stop here as this can give you a start.  Once I am back to my lab I will build this exact scenario, adding any details I may have missed.

Quote
Yes your books are an amazing, easy to understand and helpful insight into OS X El Capitan server environment.

Awesome!  If you have some time, please write reviews in the iBooks Store.  I would appreciate that.  Also, if you catch any errors or mistakes, please let me know.  I will credit you with the correction. 

I will have new releases for Book 1 and Book 2 when 10.11.2 drops.  Book 1 adds a section for multihoming, corrections, and minor additions.  This round for Book 2 will be mostly corrections (dang my feeble editing).  And Book 3 should be out before the end of the year. 

Let's start here.  Look forward to your status report.

Reid

23
Thanks for reading and I hope I can help you out.  I think I can point you in the right direction.  But first some questions.

1A:  Are you working on an all 10.11 test environment?  In other words, are the odserver and fileserver both 10.11 with server 5?  Or...
1B:  Are you working with a 10.6.8 core and a 10.11 file server?  In other words, is the odserver 10.6.8 and the fileserver 10.11?

2:  How is the fileserver associated to the OD domain?  Replica?  Member?

3:  Which file sharing protocol are you using to provide network home folders?  AFP or SMB?

4:  What client version are you trying to login with?  10.6.8 or 10.11.1?

Next some observations.  Back in the 10.6.x days, it was possible to join servers to the domain and allow them to participate as network home folder servers without the overhead of OD replication.  The net home server would publish an automount record, and WGM would see it, allowing you to define home folders on remote servers.  With Yosemite and above, I've never been able to get this to work properly.  Instead, I've been creating replica servers for each remote net home server.  Ah, but it gets a little more funky.  Let's say you have nethome1 (OD Master) and nethome2 (replica).  Both are offering net homes.  Running Server.app on nethome1 will NOT offer the network home folder created on nethome2 as a home destination.  Examining LDAP will reveal the published mount records, but Server.app simply does not show them.  You need to run Server.app on nethome2 and set home folders on the local server.
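If you want to see those published mount records for yourself, dscl will show them.  The node and record names here are illustrative:

Code:
dscl /LDAPv3/127.0.0.1 -list /Mounts
dscl /LDAPv3/127.0.0.1 -read "/Mounts/nethome2.elcap.com:/Volumes/Data/Homes"

Both servers' automount records should appear in the list even though Server.app only displays the local one.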

Next, the protocol.  I know Apple wants to move to SMB but good gravy, it still stinks.  (Truth be told El Cap has improved it quite a bit but...)  So far I have kept all net home deployments on AFP and will continue to do so.  Applications that are known to support net homes (and even those that don't) are more predictable with AFP. 

Ah, and then the bane of us OS X admins... cross-version support.  This is one of those thorny topics that I avoided on purpose.  While you can mix and match OS X Server versions for certain tasks (old OD server with new file server), it all falls apart if you try to get too fancy.  If you are trying to make this work with a mix of OD and OS versions, I fear you will continue to believe you are crazy when you are really just swimming against the tide.  When possible we strongly encourage unified OS and Server versions.  I know it is not always possible, but there are certain limitations that will make you scream.

Now, with a 120 system deployment and 10.6.8, I will bet you are in education supporting units on a cart.  If this is the case, don't try to change course in the production environment between now and June.  This is your time to build a test network and validate the migration to El Cap.  I can say that it is successful.  (In many cases I prefer Yosemite deployments as they are the most stable net homes I can ever remember setting up, even better than Snow).  Do what you can to cobble together some test gear and setup a new OD master and at least one file server.  Bring over a handful of clients and test and test.

Let's start there.  Give me some more details on the environment and let's see if we can devise a plan to solve your challenges.

24
Questions lead to answers and answers lead to understanding.  Before you know it this will all be old hat to you.

"You’re showing me I can do things in a small office I thought were only for enterprise users. Things like single sign on may appear small but create a much simpler system for people to use."

OS X Server is enterprise capable.  Granted, I would not try to make it the core of a 10,000 person business but a 100 person business is still an enterprise.  The scale may be different but the needs remain the same.

"Or I’m just not reading closely enough! Perhaps also I’m getting into trouble as I want to convert users: I have Users already set up but a lot of the book is about creating a new server setup. So sometimes I may skip and miss a bit because I think “I’ve done that already”!"

See pages 118 through 122 for Account Migration.  If it is your first time, do it with a test account so you get the process down.  But I commonly go into companies and do the migration from local to domain, and I can do a machine every 5 minutes.  No data copies.
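For anyone reading along, the common flavor of that local-to-domain flip boils down to two commands once the matching OD account exists and the machine is bound.  This is a sketch of the general approach with a made-up short name, not the book's exact procedure:

Code:
sudo dscl . -delete /Users/jdoe         # remove the local record; /Users/jdoe stays on disk
sudo chown -R jdoe:staff /Users/jdoe    # re-own the home to the domain user

Log in as the domain user and the existing home is picked up in place.  That is the "no data copies" part.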

And thanks for any and all reviews.  I've been working on the Advanced Services book earlier today.  I am trying to have it released by the end of the month.



25
No problem.  So, here is a deeper explanation. 

1:  You are correct that the local user cannot participate in single sign-on.  The reason is that even though the local workstation user is UID 501 and that local admin may share the same name as the server's 501 local admin, the accounts are different.  They have different GUIDs.  In addition, the local account on the server is not "seen" by the shared domain, Open Directory.  This is why when you log into a workstation with a locally created account you must then auth to file services.  While the device is trusted by the domain due to the bind, the user is unknown and thus is prompted to authenticate.

2:  Network Home Folder users (users with home folders on the server) only require binding to the domain.  A simple bind will allow the user to log in because the network home folder path is part of the user record.  It is that attribute that directs the machine to mount the net home share and grant access to the home folder.  Now, since the account is a domain account, access to other resources is transparent because of the Kerberos infrastructure.

3:  Mobile accounts.  I think I know what you are missing.  Mobile accounts need a bit more.  First, let's take the user account.
a:  If you create the account as Local Only, the proper home path will be created in the user account.  For this example, let's use the user John Doe with a short name of jdoe.  If you make his account using Local User in the Home Folder popup, then a folder is created in the server's /Users folder.  I hate this, and that is why I suggest creating the account as None - Services Only and then editing the account to define a valid home path.  In this case, /Users/jdoe. 
b:  Technically, that is all that is needed of the user account.  Ah, now for device trust.  This is binding.  If you are manually binding to the OD domain you are using System Preferences > Accounts or, as I prefer, Directory Utility.  This creates a machine record in OD and forms a trust between the workstation and the server.  By having device trust, we assume things like DNS and time match or are within tolerance. 
c:  Ah, but as you are discovering, setting a valid home folder path and binding to OD is not enough...  In the old days it was; you would then use OD to define MCX for the mobility payload.  On its own, the bind is not enough to allow a user to create content on the workstation.  You need to set the mobility settings allowing mobile accounts.  Aha you say!  How do I do that?

So you have two methods to make this happen.  Both involve enabling Profile Manager.  Now, you don't need to enroll your devices into profile manager (although in the long run it is easier).  You can enable Profile Manager, create or manage an OD group and define the Mobility setting.  Once you have it defined, you can download the profile and manually install it on your workstation.  But that is a lot of work.
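For the manual route, the install is a single command once the profile is downloaded from Profile Manager (the file name is a placeholder):

Code:
sudo profiles -I -F ~/Downloads/mobility.mobileconfig

profiles -P will list what is installed if you want to confirm it landed.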

If you want to go the full experience, you will enable device enrollment, enroll your workstation into profile manager, and then deliver the Mobility profile to the machine.  At minimum, you simply need to check the box to allow mobile accounts.  (oh, take a look at the second tab and uncheck the highly annoying auto-logout feature...).

These steps are covered extensively in El Capitan Server - Control & Collaboration.  That is the second book.  The pages you want to look at are:

Profile Manager chapter starting on page 10
Enabling Device Management on page 20
Enrolling Devices on page 33
Setting policy on page 44 - Jump to My First Policy on page 42 for exactly what you are trying to do

Then there is the Users and Groups chapter that tries to cover the myriad of account types in OS X.  And then the Putting it all Together chapter has a complete walk-through in the John Q Public - Managed Mobile User section.

Let me know if this helps!  I missed the discussion on the forum.  Sorry about that.  I try to help the community when I can. 

Hope the book(s) are helping.  Sorry if there is confusion.  If you have suggestions, corrections, or just feedback I will gladly take it.  Also, if you like the books please post a review in the iBooks store.  This world is all about likes and up-votes :)

Let me know if you get unstuck or if you need more help.

Reid

26
Sales for the month of October are final.  Thanks to all readers for your support and contribution.  The first of what I hope to be many donations has been made to the American Cancer Society.  Hoping November's numbers are even better! 

Spread the word.

27
Minor update.  Handful of corrections, new SIP limitations, and the release of Control & Collaboration.

28
Second of three in the El Capitan Series, El Capitan Server – Control & Collaboration, is available on Oct 30th.  Plenty of new content.  Edited and updated for El Capitan.  This is looking like a great release all around.

Working hard to get the final book out.  Getting closer on the third.  Considering a pre-order release.

Writing for a cause.  A donation to the American Cancer Society will be made for every book sold in the El Capitan Series.  Getting all three books out means more chances for the future.

29
Random Thoughts.... / Glowforge 3D Printer
« on: October 21, 2015, 07:04:02 PM »
If this unit is half as amazing as they are promoting, I may need to quit my job and just explore 3D assembly. 

http://glowforge.com/referred/?kid=BGOhsl

Where was this stuff when I was a kid?  If this comes by Christmas, you can guess what I will be doing with the family.

30
First of three in the El Capitan Series, El Capitan Server – Foundation Services, is available on Oct 2nd.  Plenty of new content.  Edited and updated for El Capitan.  This is looking like a great release all around.

Working hard to get the other two books out.  Both need editing but they are close.

Writing for a cause.  A donation to the American Cancer Society will be made for every book sold in the El Capitan Series.  Getting all three books out means more chances for the future.
